Test Report: KVM_Linux_crio 19522

d15490255971b1813e1f056874620592048fd695:2024-08-27:35972

Test fail (11/207)

TestAddons/Setup (2400.09s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-709833 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-709833 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.959436611s)
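The start command appears to have been killed at the ~40-minute mark by the outer test timeout, while the run was still waiting for the cluster and its addons to become ready. A minimal triage sketch, assuming the addons-709833 profile from this run still exists on the Jenkins host (illustrative commands, not part of the recorded test output):

	# Check whether the VM and Kubernetes components are still reported as running
	out/minikube-linux-amd64 status -p addons-709833
	# Capture cluster-level logs from the hung start for offline inspection
	out/minikube-linux-amd64 logs -p addons-709833 > addons-709833-logs.txt
	# List pods that never became Ready (the kubectl context name matches the profile name)
	kubectl --context addons-709833 get pods -A
	# Remove the leftover VM and profile before re-running the test
	out/minikube-linux-amd64 delete -p addons-709833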

-- stdout --
	* [addons-709833] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-709833" primary control-plane node in "addons-709833" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* Verifying registry addon...
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-709833 service yakd-dashboard -n yakd-dashboard
	
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-709833 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0827 21:38:04.852086   15495 out.go:345] Setting OutFile to fd 1 ...
	I0827 21:38:04.852300   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:38:04.852309   15495 out.go:358] Setting ErrFile to fd 2...
	I0827 21:38:04.852314   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:38:04.852487   15495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 21:38:04.853027   15495 out.go:352] Setting JSON to false
	I0827 21:38:04.854316   15495 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1232,"bootTime":1724793453,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 21:38:04.854524   15495 start.go:139] virtualization: kvm guest
	I0827 21:38:04.856712   15495 out.go:177] * [addons-709833] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 21:38:04.857883   15495 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 21:38:04.857904   15495 notify.go:220] Checking for updates...
	I0827 21:38:04.860181   15495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 21:38:04.861287   15495 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 21:38:04.862448   15495 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 21:38:04.863713   15495 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 21:38:04.865037   15495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 21:38:04.866506   15495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 21:38:04.897513   15495 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 21:38:04.898953   15495 start.go:297] selected driver: kvm2
	I0827 21:38:04.898979   15495 start.go:901] validating driver "kvm2" against <nil>
	I0827 21:38:04.898996   15495 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 21:38:04.899686   15495 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:38:04.899773   15495 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 21:38:04.914622   15495 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 21:38:04.914673   15495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 21:38:04.914937   15495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 21:38:04.915019   15495 cni.go:84] Creating CNI manager for ""
	I0827 21:38:04.915034   15495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 21:38:04.915049   15495 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 21:38:04.915129   15495 start.go:340] cluster config:
	{Name:addons-709833 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-709833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 21:38:04.915273   15495 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:38:04.917107   15495 out.go:177] * Starting "addons-709833" primary control-plane node in "addons-709833" cluster
	I0827 21:38:04.918443   15495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 21:38:04.918480   15495 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 21:38:04.918491   15495 cache.go:56] Caching tarball of preloaded images
	I0827 21:38:04.918575   15495 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 21:38:04.918586   15495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 21:38:04.918933   15495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/config.json ...
	I0827 21:38:04.918960   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/config.json: {Name:mk475d436ef2618f055df737842b165fd3cf9a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:04.919096   15495 start.go:360] acquireMachinesLock for addons-709833: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 21:38:04.919150   15495 start.go:364] duration metric: took 36.975µs to acquireMachinesLock for "addons-709833"
	I0827 21:38:04.919174   15495 start.go:93] Provisioning new machine with config: &{Name:addons-709833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-709833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 21:38:04.919234   15495 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 21:38:04.921031   15495 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0827 21:38:04.921156   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:04.921199   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:04.935272   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0827 21:38:04.935679   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:04.936199   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:04.936224   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:04.936652   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:04.936870   15495 main.go:141] libmachine: (addons-709833) Calling .GetMachineName
	I0827 21:38:04.937029   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:04.937180   15495 start.go:159] libmachine.API.Create for "addons-709833" (driver="kvm2")
	I0827 21:38:04.937207   15495 client.go:168] LocalClient.Create starting
	I0827 21:38:04.937245   15495 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 21:38:05.187876   15495 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 21:38:05.404983   15495 main.go:141] libmachine: Running pre-create checks...
	I0827 21:38:05.405007   15495 main.go:141] libmachine: (addons-709833) Calling .PreCreateCheck
	I0827 21:38:05.405538   15495 main.go:141] libmachine: (addons-709833) Calling .GetConfigRaw
	I0827 21:38:05.405968   15495 main.go:141] libmachine: Creating machine...
	I0827 21:38:05.405983   15495 main.go:141] libmachine: (addons-709833) Calling .Create
	I0827 21:38:05.406147   15495 main.go:141] libmachine: (addons-709833) Creating KVM machine...
	I0827 21:38:05.407267   15495 main.go:141] libmachine: (addons-709833) DBG | found existing default KVM network
	I0827 21:38:05.407955   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:05.407825   15518 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0827 21:38:05.407997   15495 main.go:141] libmachine: (addons-709833) DBG | created network xml: 
	I0827 21:38:05.408019   15495 main.go:141] libmachine: (addons-709833) DBG | <network>
	I0827 21:38:05.408035   15495 main.go:141] libmachine: (addons-709833) DBG |   <name>mk-addons-709833</name>
	I0827 21:38:05.408043   15495 main.go:141] libmachine: (addons-709833) DBG |   <dns enable='no'/>
	I0827 21:38:05.408052   15495 main.go:141] libmachine: (addons-709833) DBG |   
	I0827 21:38:05.408065   15495 main.go:141] libmachine: (addons-709833) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0827 21:38:05.408087   15495 main.go:141] libmachine: (addons-709833) DBG |     <dhcp>
	I0827 21:38:05.408123   15495 main.go:141] libmachine: (addons-709833) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0827 21:38:05.408150   15495 main.go:141] libmachine: (addons-709833) DBG |     </dhcp>
	I0827 21:38:05.408159   15495 main.go:141] libmachine: (addons-709833) DBG |   </ip>
	I0827 21:38:05.408168   15495 main.go:141] libmachine: (addons-709833) DBG |   
	I0827 21:38:05.408174   15495 main.go:141] libmachine: (addons-709833) DBG | </network>
	I0827 21:38:05.408179   15495 main.go:141] libmachine: (addons-709833) DBG | 
	I0827 21:38:05.413512   15495 main.go:141] libmachine: (addons-709833) DBG | trying to create private KVM network mk-addons-709833 192.168.39.0/24...
	I0827 21:38:05.477203   15495 main.go:141] libmachine: (addons-709833) DBG | private KVM network mk-addons-709833 192.168.39.0/24 created
	I0827 21:38:05.477240   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:05.477170   15518 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 21:38:05.477253   15495 main.go:141] libmachine: (addons-709833) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833 ...
	I0827 21:38:05.477273   15495 main.go:141] libmachine: (addons-709833) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 21:38:05.477423   15495 main.go:141] libmachine: (addons-709833) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 21:38:05.733630   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:05.733494   15518 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa...
	I0827 21:38:05.910575   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:05.910429   15518 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/addons-709833.rawdisk...
	I0827 21:38:05.910615   15495 main.go:141] libmachine: (addons-709833) DBG | Writing magic tar header
	I0827 21:38:05.910628   15495 main.go:141] libmachine: (addons-709833) DBG | Writing SSH key tar header
	I0827 21:38:05.910636   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:05.910558   15518 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833 ...
	I0827 21:38:05.910716   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833
	I0827 21:38:05.910737   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833 (perms=drwx------)
	I0827 21:38:05.910748   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 21:38:05.910762   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 21:38:05.910771   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 21:38:05.910782   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 21:38:05.910796   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home/jenkins
	I0827 21:38:05.910811   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 21:38:05.910830   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 21:38:05.910843   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 21:38:05.910853   15495 main.go:141] libmachine: (addons-709833) DBG | Checking permissions on dir: /home
	I0827 21:38:05.910870   15495 main.go:141] libmachine: (addons-709833) DBG | Skipping /home - not owner
	I0827 21:38:05.910905   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 21:38:05.910926   15495 main.go:141] libmachine: (addons-709833) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 21:38:05.910935   15495 main.go:141] libmachine: (addons-709833) Creating domain...
	I0827 21:38:05.911842   15495 main.go:141] libmachine: (addons-709833) define libvirt domain using xml: 
	I0827 21:38:05.911857   15495 main.go:141] libmachine: (addons-709833) <domain type='kvm'>
	I0827 21:38:05.911867   15495 main.go:141] libmachine: (addons-709833)   <name>addons-709833</name>
	I0827 21:38:05.911873   15495 main.go:141] libmachine: (addons-709833)   <memory unit='MiB'>4000</memory>
	I0827 21:38:05.911878   15495 main.go:141] libmachine: (addons-709833)   <vcpu>2</vcpu>
	I0827 21:38:05.911882   15495 main.go:141] libmachine: (addons-709833)   <features>
	I0827 21:38:05.911897   15495 main.go:141] libmachine: (addons-709833)     <acpi/>
	I0827 21:38:05.911902   15495 main.go:141] libmachine: (addons-709833)     <apic/>
	I0827 21:38:05.911907   15495 main.go:141] libmachine: (addons-709833)     <pae/>
	I0827 21:38:05.911914   15495 main.go:141] libmachine: (addons-709833)     
	I0827 21:38:05.911923   15495 main.go:141] libmachine: (addons-709833)   </features>
	I0827 21:38:05.911940   15495 main.go:141] libmachine: (addons-709833)   <cpu mode='host-passthrough'>
	I0827 21:38:05.911949   15495 main.go:141] libmachine: (addons-709833)   
	I0827 21:38:05.911961   15495 main.go:141] libmachine: (addons-709833)   </cpu>
	I0827 21:38:05.911970   15495 main.go:141] libmachine: (addons-709833)   <os>
	I0827 21:38:05.911976   15495 main.go:141] libmachine: (addons-709833)     <type>hvm</type>
	I0827 21:38:05.911983   15495 main.go:141] libmachine: (addons-709833)     <boot dev='cdrom'/>
	I0827 21:38:05.911989   15495 main.go:141] libmachine: (addons-709833)     <boot dev='hd'/>
	I0827 21:38:05.911996   15495 main.go:141] libmachine: (addons-709833)     <bootmenu enable='no'/>
	I0827 21:38:05.912001   15495 main.go:141] libmachine: (addons-709833)   </os>
	I0827 21:38:05.912015   15495 main.go:141] libmachine: (addons-709833)   <devices>
	I0827 21:38:05.912023   15495 main.go:141] libmachine: (addons-709833)     <disk type='file' device='cdrom'>
	I0827 21:38:05.912030   15495 main.go:141] libmachine: (addons-709833)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/boot2docker.iso'/>
	I0827 21:38:05.912039   15495 main.go:141] libmachine: (addons-709833)       <target dev='hdc' bus='scsi'/>
	I0827 21:38:05.912043   15495 main.go:141] libmachine: (addons-709833)       <readonly/>
	I0827 21:38:05.912048   15495 main.go:141] libmachine: (addons-709833)     </disk>
	I0827 21:38:05.912056   15495 main.go:141] libmachine: (addons-709833)     <disk type='file' device='disk'>
	I0827 21:38:05.912081   15495 main.go:141] libmachine: (addons-709833)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 21:38:05.912106   15495 main.go:141] libmachine: (addons-709833)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/addons-709833.rawdisk'/>
	I0827 21:38:05.912122   15495 main.go:141] libmachine: (addons-709833)       <target dev='hda' bus='virtio'/>
	I0827 21:38:05.912132   15495 main.go:141] libmachine: (addons-709833)     </disk>
	I0827 21:38:05.912142   15495 main.go:141] libmachine: (addons-709833)     <interface type='network'>
	I0827 21:38:05.912153   15495 main.go:141] libmachine: (addons-709833)       <source network='mk-addons-709833'/>
	I0827 21:38:05.912161   15495 main.go:141] libmachine: (addons-709833)       <model type='virtio'/>
	I0827 21:38:05.912173   15495 main.go:141] libmachine: (addons-709833)     </interface>
	I0827 21:38:05.912187   15495 main.go:141] libmachine: (addons-709833)     <interface type='network'>
	I0827 21:38:05.912199   15495 main.go:141] libmachine: (addons-709833)       <source network='default'/>
	I0827 21:38:05.912211   15495 main.go:141] libmachine: (addons-709833)       <model type='virtio'/>
	I0827 21:38:05.912230   15495 main.go:141] libmachine: (addons-709833)     </interface>
	I0827 21:38:05.912252   15495 main.go:141] libmachine: (addons-709833)     <serial type='pty'>
	I0827 21:38:05.912269   15495 main.go:141] libmachine: (addons-709833)       <target port='0'/>
	I0827 21:38:05.912278   15495 main.go:141] libmachine: (addons-709833)     </serial>
	I0827 21:38:05.912290   15495 main.go:141] libmachine: (addons-709833)     <console type='pty'>
	I0827 21:38:05.912304   15495 main.go:141] libmachine: (addons-709833)       <target type='serial' port='0'/>
	I0827 21:38:05.912324   15495 main.go:141] libmachine: (addons-709833)     </console>
	I0827 21:38:05.912341   15495 main.go:141] libmachine: (addons-709833)     <rng model='virtio'>
	I0827 21:38:05.912349   15495 main.go:141] libmachine: (addons-709833)       <backend model='random'>/dev/random</backend>
	I0827 21:38:05.912366   15495 main.go:141] libmachine: (addons-709833)     </rng>
	I0827 21:38:05.912376   15495 main.go:141] libmachine: (addons-709833)     
	I0827 21:38:05.912384   15495 main.go:141] libmachine: (addons-709833)     
	I0827 21:38:05.912397   15495 main.go:141] libmachine: (addons-709833)   </devices>
	I0827 21:38:05.912408   15495 main.go:141] libmachine: (addons-709833) </domain>
	I0827 21:38:05.912417   15495 main.go:141] libmachine: (addons-709833) 
	I0827 21:38:05.918085   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:4f:0a:46 in network default
	I0827 21:38:05.918667   15495 main.go:141] libmachine: (addons-709833) Ensuring networks are active...
	I0827 21:38:05.918683   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:05.919230   15495 main.go:141] libmachine: (addons-709833) Ensuring network default is active
	I0827 21:38:05.919522   15495 main.go:141] libmachine: (addons-709833) Ensuring network mk-addons-709833 is active
	I0827 21:38:05.919927   15495 main.go:141] libmachine: (addons-709833) Getting domain xml...
	I0827 21:38:05.920523   15495 main.go:141] libmachine: (addons-709833) Creating domain...
	I0827 21:38:07.301106   15495 main.go:141] libmachine: (addons-709833) Waiting to get IP...
	I0827 21:38:07.302050   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:07.302494   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:07.302539   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:07.302485   15518 retry.go:31] will retry after 273.087146ms: waiting for machine to come up
	I0827 21:38:07.576696   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:07.577077   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:07.577099   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:07.577034   15518 retry.go:31] will retry after 304.073929ms: waiting for machine to come up
	I0827 21:38:07.882542   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:07.882885   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:07.882921   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:07.882854   15518 retry.go:31] will retry after 418.664289ms: waiting for machine to come up
	I0827 21:38:08.303568   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:08.304040   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:08.304063   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:08.304002   15518 retry.go:31] will retry after 604.562189ms: waiting for machine to come up
	I0827 21:38:08.910709   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:08.911196   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:08.911225   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:08.911146   15518 retry.go:31] will retry after 528.612469ms: waiting for machine to come up
	I0827 21:38:09.440956   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:09.441315   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:09.441341   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:09.441289   15518 retry.go:31] will retry after 687.425632ms: waiting for machine to come up
	I0827 21:38:10.130916   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:10.131276   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:10.131300   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:10.131235   15518 retry.go:31] will retry after 1.08366497s: waiting for machine to come up
	I0827 21:38:11.216135   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:11.216520   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:11.216550   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:11.216496   15518 retry.go:31] will retry after 1.36125242s: waiting for machine to come up
	I0827 21:38:12.579040   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:12.579407   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:12.579435   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:12.579367   15518 retry.go:31] will retry after 1.71621317s: waiting for machine to come up
	I0827 21:38:14.297586   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:14.297995   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:14.298022   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:14.297943   15518 retry.go:31] will retry after 2.012848271s: waiting for machine to come up
	I0827 21:38:16.313054   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:16.313475   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:16.313498   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:16.313434   15518 retry.go:31] will retry after 1.951498815s: waiting for machine to come up
	I0827 21:38:18.267437   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:18.267784   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:18.267807   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:18.267743   15518 retry.go:31] will retry after 2.860021141s: waiting for machine to come up
	I0827 21:38:21.129144   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:21.129497   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:21.129543   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:21.129453   15518 retry.go:31] will retry after 3.155628253s: waiting for machine to come up
	I0827 21:38:24.288707   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:24.289144   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find current IP address of domain addons-709833 in network mk-addons-709833
	I0827 21:38:24.289172   15495 main.go:141] libmachine: (addons-709833) DBG | I0827 21:38:24.289109   15518 retry.go:31] will retry after 3.532731738s: waiting for machine to come up
	I0827 21:38:27.825523   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:27.825951   15495 main.go:141] libmachine: (addons-709833) Found IP for machine: 192.168.39.186
	I0827 21:38:27.825986   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has current primary IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:27.825997   15495 main.go:141] libmachine: (addons-709833) Reserving static IP address...
	I0827 21:38:27.826344   15495 main.go:141] libmachine: (addons-709833) DBG | unable to find host DHCP lease matching {name: "addons-709833", mac: "52:54:00:be:dd:69", ip: "192.168.39.186"} in network mk-addons-709833
	I0827 21:38:27.894884   15495 main.go:141] libmachine: (addons-709833) DBG | Getting to WaitForSSH function...
	I0827 21:38:27.894914   15495 main.go:141] libmachine: (addons-709833) Reserved static IP address: 192.168.39.186
	I0827 21:38:27.894932   15495 main.go:141] libmachine: (addons-709833) Waiting for SSH to be available...
	I0827 21:38:27.897302   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:27.897726   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:27.897771   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:27.897879   15495 main.go:141] libmachine: (addons-709833) DBG | Using SSH client type: external
	I0827 21:38:27.897899   15495 main.go:141] libmachine: (addons-709833) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa (-rw-------)
	I0827 21:38:27.897999   15495 main.go:141] libmachine: (addons-709833) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 21:38:27.898019   15495 main.go:141] libmachine: (addons-709833) DBG | About to run SSH command:
	I0827 21:38:27.898032   15495 main.go:141] libmachine: (addons-709833) DBG | exit 0
	I0827 21:38:28.028341   15495 main.go:141] libmachine: (addons-709833) DBG | SSH cmd err, output: <nil>: 
	I0827 21:38:28.028606   15495 main.go:141] libmachine: (addons-709833) KVM machine creation complete!
	I0827 21:38:28.028898   15495 main.go:141] libmachine: (addons-709833) Calling .GetConfigRaw
	I0827 21:38:28.029991   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:28.030802   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:28.031017   15495 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 21:38:28.031036   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:28.032537   15495 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 21:38:28.032559   15495 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 21:38:28.032568   15495 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 21:38:28.032578   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.035319   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.035692   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.035725   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.035843   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.036027   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.036197   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.036331   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.036483   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:28.036768   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:28.036785   15495 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 21:38:28.135641   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 21:38:28.135671   15495 main.go:141] libmachine: Detecting the provisioner...
	I0827 21:38:28.135682   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.138308   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.138647   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.138675   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.138884   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.139086   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.139269   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.139432   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.139592   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:28.139768   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:28.139781   15495 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 21:38:28.240835   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 21:38:28.240920   15495 main.go:141] libmachine: found compatible host: buildroot
	I0827 21:38:28.240935   15495 main.go:141] libmachine: Provisioning with buildroot...
	I0827 21:38:28.240948   15495 main.go:141] libmachine: (addons-709833) Calling .GetMachineName
	I0827 21:38:28.241213   15495 buildroot.go:166] provisioning hostname "addons-709833"
	I0827 21:38:28.241235   15495 main.go:141] libmachine: (addons-709833) Calling .GetMachineName
	I0827 21:38:28.241446   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.244059   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.244427   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.244455   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.244613   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.244849   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.245013   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.245207   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.245362   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:28.245524   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:28.245537   15495 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-709833 && echo "addons-709833" | sudo tee /etc/hostname
	I0827 21:38:28.358281   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-709833
	
	I0827 21:38:28.358305   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.360565   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.360934   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.360965   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.361114   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.361297   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.361475   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.361608   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.361749   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:28.361956   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:28.361974   15495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-709833' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-709833/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-709833' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 21:38:28.468822   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 21:38:28.468859   15495 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 21:38:28.468909   15495 buildroot.go:174] setting up certificates
	I0827 21:38:28.468932   15495 provision.go:84] configureAuth start
	I0827 21:38:28.468952   15495 main.go:141] libmachine: (addons-709833) Calling .GetMachineName
	I0827 21:38:28.469371   15495 main.go:141] libmachine: (addons-709833) Calling .GetIP
	I0827 21:38:28.471958   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.472364   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.472398   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.472564   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.475063   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.475433   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.475450   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.475615   15495 provision.go:143] copyHostCerts
	I0827 21:38:28.475691   15495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 21:38:28.475853   15495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 21:38:28.475938   15495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 21:38:28.476042   15495 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.addons-709833 san=[127.0.0.1 192.168.39.186 addons-709833 localhost minikube]
	I0827 21:38:28.608679   15495 provision.go:177] copyRemoteCerts
	I0827 21:38:28.608732   15495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 21:38:28.608754   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.611070   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.611349   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.611374   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.611575   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.611762   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.611910   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.612037   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:28.690188   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 21:38:28.713369   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 21:38:28.736932   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 21:38:28.759185   15495 provision.go:87] duration metric: took 290.238327ms to configureAuth
	I0827 21:38:28.759214   15495 buildroot.go:189] setting minikube options for container-runtime
	I0827 21:38:28.759427   15495 config.go:182] Loaded profile config "addons-709833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 21:38:28.759513   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.761973   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.762385   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.762422   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.762553   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.762762   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.762937   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.763048   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.763195   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:28.763368   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:28.763382   15495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 21:38:28.983368   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 21:38:28.983398   15495 main.go:141] libmachine: Checking connection to Docker...
	I0827 21:38:28.983408   15495 main.go:141] libmachine: (addons-709833) Calling .GetURL
	I0827 21:38:28.984909   15495 main.go:141] libmachine: (addons-709833) DBG | Using libvirt version 6000000
	I0827 21:38:28.987189   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.987478   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.987507   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.987658   15495 main.go:141] libmachine: Docker is up and running!
	I0827 21:38:28.987673   15495 main.go:141] libmachine: Reticulating splines...
	I0827 21:38:28.987682   15495 client.go:171] duration metric: took 24.05046753s to LocalClient.Create
	I0827 21:38:28.987704   15495 start.go:167] duration metric: took 24.050524446s to libmachine.API.Create "addons-709833"
	I0827 21:38:28.987717   15495 start.go:293] postStartSetup for "addons-709833" (driver="kvm2")
	I0827 21:38:28.987730   15495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 21:38:28.987753   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:28.987997   15495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 21:38:28.988018   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:28.990192   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.990544   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:28.990586   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:28.990676   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:28.990855   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:28.991023   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:28.991170   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:29.070411   15495 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 21:38:29.074280   15495 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 21:38:29.074305   15495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 21:38:29.074385   15495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 21:38:29.074415   15495 start.go:296] duration metric: took 86.690846ms for postStartSetup
	I0827 21:38:29.074455   15495 main.go:141] libmachine: (addons-709833) Calling .GetConfigRaw
	I0827 21:38:29.074968   15495 main.go:141] libmachine: (addons-709833) Calling .GetIP
	I0827 21:38:29.077408   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.077766   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:29.077798   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.077997   15495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/config.json ...
	I0827 21:38:29.078197   15495 start.go:128] duration metric: took 24.158952001s to createHost
	I0827 21:38:29.078219   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:29.080372   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.080689   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:29.080713   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.080909   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:29.081105   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:29.081289   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:29.081448   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:29.081639   15495 main.go:141] libmachine: Using SSH client type: native
	I0827 21:38:29.081786   15495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0827 21:38:29.081796   15495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 21:38:29.180726   15495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724794709.141985547
	
	I0827 21:38:29.180748   15495 fix.go:216] guest clock: 1724794709.141985547
	I0827 21:38:29.180756   15495 fix.go:229] Guest: 2024-08-27 21:38:29.141985547 +0000 UTC Remote: 2024-08-27 21:38:29.078208346 +0000 UTC m=+24.257636282 (delta=63.777201ms)
	I0827 21:38:29.180812   15495 fix.go:200] guest clock delta is within tolerance: 63.777201ms
	I0827 21:38:29.180819   15495 start.go:83] releasing machines lock for "addons-709833", held for 24.261659471s
	I0827 21:38:29.180842   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:29.181093   15495 main.go:141] libmachine: (addons-709833) Calling .GetIP
	I0827 21:38:29.183384   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.183683   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:29.183708   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.183825   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:29.184390   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:29.184580   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:29.184682   15495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 21:38:29.184722   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:29.184758   15495 ssh_runner.go:195] Run: cat /version.json
	I0827 21:38:29.184777   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:29.187376   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.187400   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.187715   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:29.187750   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.187779   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:29.187798   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:29.187907   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:29.188042   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:29.188093   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:29.188194   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:29.188259   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:29.188379   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:29.188438   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:29.188549   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:29.261628   15495 ssh_runner.go:195] Run: systemctl --version
	I0827 21:38:29.301761   15495 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 21:38:29.459838   15495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 21:38:29.466003   15495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 21:38:29.466071   15495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 21:38:29.481157   15495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 21:38:29.481182   15495 start.go:495] detecting cgroup driver to use...
	I0827 21:38:29.481243   15495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 21:38:29.496207   15495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 21:38:29.509337   15495 docker.go:217] disabling cri-docker service (if available) ...
	I0827 21:38:29.509404   15495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 21:38:29.522014   15495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 21:38:29.534727   15495 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 21:38:29.646806   15495 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 21:38:29.817602   15495 docker.go:233] disabling docker service ...
	I0827 21:38:29.817679   15495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 21:38:29.830881   15495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 21:38:29.842800   15495 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 21:38:29.959539   15495 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 21:38:30.076297   15495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 21:38:30.089671   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 21:38:30.106946   15495 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 21:38:30.107008   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.117923   15495 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 21:38:30.117995   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.128129   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.138090   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.148081   15495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 21:38:30.157955   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.167623   15495 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 21:38:30.183524   15495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
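The sed commands above rewrite CRI-O's drop-in in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A quick way to confirm the result, assuming the same drop-in path and otherwise untouched defaults:

    # Sketch: check the values the preceding sed edits should leave in the CRI-O drop-in.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",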
	I0827 21:38:30.193009   15495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 21:38:30.201551   15495 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 21:38:30.201614   15495 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 21:38:30.216907   15495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 21:38:30.228483   15495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 21:38:30.344184   15495 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 21:38:30.432961   15495 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 21:38:30.433062   15495 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 21:38:30.437267   15495 start.go:563] Will wait 60s for crictl version
	I0827 21:38:30.437346   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:38:30.440695   15495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 21:38:30.476562   15495 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 21:38:30.476697   15495 ssh_runner.go:195] Run: crio --version
	I0827 21:38:30.502558   15495 ssh_runner.go:195] Run: crio --version
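crictl was pointed at the CRI-O socket via /etc/crictl.yaml a moment earlier (21:38:30.089) and is queried here; the same wiring done by hand, assuming the socket path from the log:

    # Sketch: configure crictl for CRI-O and confirm the runtime version reported above.
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo crictl version     # expect RuntimeName cri-o, RuntimeVersion 1.29.1
    crio --version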
	I0827 21:38:30.533295   15495 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 21:38:30.534642   15495 main.go:141] libmachine: (addons-709833) Calling .GetIP
	I0827 21:38:30.537195   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:30.537510   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:30.537538   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:30.537822   15495 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 21:38:30.541610   15495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 21:38:30.553147   15495 kubeadm.go:883] updating cluster {Name:addons-709833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-709833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 21:38:30.553275   15495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 21:38:30.553318   15495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 21:38:30.587221   15495 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0827 21:38:30.587281   15495 ssh_runner.go:195] Run: which lz4
	I0827 21:38:30.590983   15495 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 21:38:30.594810   15495 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 21:38:30.594838   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0827 21:38:31.745983   15495 crio.go:462] duration metric: took 1.155024866s to copy over tarball
	I0827 21:38:31.746071   15495 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 21:38:33.867228   15495 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121124009s)
	I0827 21:38:33.867256   15495 crio.go:469] duration metric: took 2.121245483s to extract the tarball
	I0827 21:38:33.867264   15495 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0827 21:38:33.902918   15495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 21:38:33.941966   15495 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 21:38:33.941989   15495 cache_images.go:84] Images are preloaded, skipping loading
	I0827 21:38:33.941998   15495 kubeadm.go:934] updating node { 192.168.39.186 8443 v1.31.0 crio true true} ...
	I0827 21:38:33.942097   15495 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-709833 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-709833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 21:38:33.942158   15495 ssh_runner.go:195] Run: crio config
	I0827 21:38:33.982281   15495 cni.go:84] Creating CNI manager for ""
	I0827 21:38:33.982300   15495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 21:38:33.982310   15495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 21:38:33.982335   15495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.186 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-709833 NodeName:addons-709833 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 21:38:33.982498   15495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.186
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-709833"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
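The config above uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 reports as deprecated further down in this log. The migration it suggests, sketched with the paths minikube uses here (the output file name is only illustrative):

    # Sketch: rewrite the generated v1beta3 config with the current kubeadm config API.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml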
	
	I0827 21:38:33.982577   15495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 21:38:33.991789   15495 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 21:38:33.991842   15495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 21:38:34.000681   15495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0827 21:38:34.016649   15495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 21:38:34.031226   15495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0827 21:38:34.045978   15495 ssh_runner.go:195] Run: grep 192.168.39.186	control-plane.minikube.internal$ /etc/hosts
	I0827 21:38:34.049298   15495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 21:38:34.060125   15495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 21:38:34.176712   15495 ssh_runner.go:195] Run: sudo systemctl start kubelet
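The kubelet unit shown earlier is installed as kubelet.service plus a 10-kubeadm.conf drop-in (313 bytes per the log), and the service is started here. A quick sanity check of that wiring, using the same paths:

    # Sketch: confirm the kubelet unit and drop-in that minikube just installed.
    systemctl cat kubelet                                             # should list the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # ExecStart flags shown earlier in the log
    sudo systemctl daemon-reload && sudo systemctl restart kubelet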
	I0827 21:38:34.192479   15495 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833 for IP: 192.168.39.186
	I0827 21:38:34.192505   15495 certs.go:194] generating shared ca certs ...
	I0827 21:38:34.192526   15495 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.192696   15495 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 21:38:34.409816   15495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt ...
	I0827 21:38:34.409843   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt: {Name:mkc780eff1451d44a34b31ca45815e39dea29cc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.409995   15495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key ...
	I0827 21:38:34.410005   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key: {Name:mk2b1ed53a08752685d29abf0764ff8e9fb6fc27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.410069   15495 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 21:38:34.488138   15495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt ...
	I0827 21:38:34.488165   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt: {Name:mk43bd027c324d2cc5253758aaad672b3f03b205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.488313   15495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key ...
	I0827 21:38:34.488323   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key: {Name:mk76645bf7290035e97b0c5e0f13bf5822d9a6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.488899   15495 certs.go:256] generating profile certs ...
	I0827 21:38:34.488956   15495 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.key
	I0827 21:38:34.488978   15495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.crt with IP's: []
	I0827 21:38:34.597126   15495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.crt ...
	I0827 21:38:34.597152   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.crt: {Name:mk4363978f49074222681c98388afeb6b91ebe0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.597298   15495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.key ...
	I0827 21:38:34.597309   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/client.key: {Name:mk19c070d4aba6e4b70bf6a2886d225f545f0b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.597370   15495 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key.c63640c8
	I0827 21:38:34.597390   15495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt.c63640c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.186]
	I0827 21:38:34.877679   15495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt.c63640c8 ...
	I0827 21:38:34.877710   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt.c63640c8: {Name:mka0786a6d9f2617d3cbb6dc8a5e3f677efbf09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.877875   15495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key.c63640c8 ...
	I0827 21:38:34.877889   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key.c63640c8: {Name:mk53716b3aa0de0113ff04ae8d086fa36c86d159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.877956   15495 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt.c63640c8 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt
	I0827 21:38:34.878023   15495 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key.c63640c8 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key
	I0827 21:38:34.878067   15495 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.key
	I0827 21:38:34.878085   15495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.crt with IP's: []
	I0827 21:38:34.975492   15495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.crt ...
	I0827 21:38:34.975517   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.crt: {Name:mk96270790ff6b9ad80b10ea21d76bc07d7b0071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.975672   15495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.key ...
	I0827 21:38:34.975683   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.key: {Name:mk24e9a2dd04ddd6135b6ffb59b574df397c9ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:34.975836   15495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 21:38:34.975868   15495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 21:38:34.975894   15495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 21:38:34.975917   15495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 21:38:34.976488   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 21:38:35.002551   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 21:38:35.025391   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 21:38:35.047401   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 21:38:35.068932   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0827 21:38:35.089349   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 21:38:35.109796   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 21:38:35.132046   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/addons-709833/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 21:38:35.152533   15495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 21:38:35.173229   15495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 21:38:35.187436   15495 ssh_runner.go:195] Run: openssl version
	I0827 21:38:35.192420   15495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 21:38:35.201517   15495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 21:38:35.205410   15495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 21:38:35.205454   15495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 21:38:35.210650   15495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
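The link name b5213941.0 created above is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which the preceding openssl x509 -hash -noout call computes. The same derivation by hand:

    # Sketch: derive the /etc/ssl/certs/<hash>.0 link name used above.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"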
	I0827 21:38:35.219906   15495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 21:38:35.223521   15495 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 21:38:35.223577   15495 kubeadm.go:392] StartCluster: {Name:addons-709833 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-709833 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 21:38:35.223653   15495 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 21:38:35.223694   15495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 21:38:35.259425   15495 cri.go:89] found id: ""
	I0827 21:38:35.259493   15495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 21:38:35.268239   15495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 21:38:35.276420   15495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 21:38:35.284418   15495 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 21:38:35.284434   15495 kubeadm.go:157] found existing configuration files:
	
	I0827 21:38:35.284485   15495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 21:38:35.292001   15495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 21:38:35.292049   15495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 21:38:35.299861   15495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 21:38:35.307650   15495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 21:38:35.307694   15495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 21:38:35.316042   15495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 21:38:35.323987   15495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 21:38:35.324043   15495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 21:38:35.332393   15495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 21:38:35.340261   15495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 21:38:35.340310   15495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 21:38:35.348365   15495 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 21:38:35.399572   15495 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0827 21:38:35.399650   15495 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 21:38:35.489548   15495 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 21:38:35.489636   15495 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 21:38:35.489712   15495 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 21:38:35.499763   15495 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 21:38:35.502646   15495 out.go:235]   - Generating certificates and keys ...
	I0827 21:38:35.502735   15495 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 21:38:35.502813   15495 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 21:38:35.674262   15495 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 21:38:35.856493   15495 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 21:38:35.975606   15495 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 21:38:36.152950   15495 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 21:38:36.225865   15495 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 21:38:36.225979   15495 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-709833 localhost] and IPs [192.168.39.186 127.0.0.1 ::1]
	I0827 21:38:36.299228   15495 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 21:38:36.299358   15495 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-709833 localhost] and IPs [192.168.39.186 127.0.0.1 ::1]
	I0827 21:38:36.408140   15495 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 21:38:36.613225   15495 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 21:38:36.723176   15495 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 21:38:36.723273   15495 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 21:38:36.819308   15495 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 21:38:37.041099   15495 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0827 21:38:37.137695   15495 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 21:38:37.224302   15495 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 21:38:37.529403   15495 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 21:38:37.529911   15495 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 21:38:37.532340   15495 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 21:38:37.589747   15495 out.go:235]   - Booting up control plane ...
	I0827 21:38:37.589877   15495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 21:38:37.589956   15495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 21:38:37.590059   15495 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 21:38:37.590191   15495 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 21:38:37.590295   15495 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 21:38:37.590345   15495 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 21:38:37.685176   15495 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0827 21:38:37.685327   15495 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0827 21:38:38.191935   15495 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.527192ms
	I0827 21:38:38.192039   15495 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0827 21:38:43.193953   15495 kubeadm.go:310] [api-check] The API server is healthy after 5.001706231s
	I0827 21:38:43.206379   15495 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 21:38:43.224888   15495 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 21:38:43.252323   15495 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 21:38:43.252600   15495 kubeadm.go:310] [mark-control-plane] Marking the node addons-709833 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 21:38:43.265443   15495 kubeadm.go:310] [bootstrap-token] Using token: orfk67.junufwtbfns26wdu
	I0827 21:38:43.267064   15495 out.go:235]   - Configuring RBAC rules ...
	I0827 21:38:43.267207   15495 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 21:38:43.275173   15495 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 21:38:43.282207   15495 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 21:38:43.285186   15495 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 21:38:43.288075   15495 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 21:38:43.291058   15495 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 21:38:43.601552   15495 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 21:38:44.034573   15495 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 21:38:44.603211   15495 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 21:38:44.603232   15495 kubeadm.go:310] 
	I0827 21:38:44.603286   15495 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 21:38:44.603295   15495 kubeadm.go:310] 
	I0827 21:38:44.603421   15495 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 21:38:44.603444   15495 kubeadm.go:310] 
	I0827 21:38:44.603478   15495 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 21:38:44.603569   15495 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 21:38:44.603642   15495 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 21:38:44.603652   15495 kubeadm.go:310] 
	I0827 21:38:44.603719   15495 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 21:38:44.603729   15495 kubeadm.go:310] 
	I0827 21:38:44.603799   15495 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 21:38:44.603809   15495 kubeadm.go:310] 
	I0827 21:38:44.603875   15495 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 21:38:44.603945   15495 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 21:38:44.604023   15495 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 21:38:44.604039   15495 kubeadm.go:310] 
	I0827 21:38:44.604158   15495 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 21:38:44.604260   15495 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 21:38:44.604270   15495 kubeadm.go:310] 
	I0827 21:38:44.604386   15495 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token orfk67.junufwtbfns26wdu \
	I0827 21:38:44.604518   15495 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 \
	I0827 21:38:44.604545   15495 kubeadm.go:310] 	--control-plane 
	I0827 21:38:44.604556   15495 kubeadm.go:310] 
	I0827 21:38:44.604626   15495 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 21:38:44.604633   15495 kubeadm.go:310] 
	I0827 21:38:44.604703   15495 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token orfk67.junufwtbfns26wdu \
	I0827 21:38:44.604796   15495 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 
	I0827 21:38:44.605535   15495 kubeadm.go:310] W0827 21:38:35.351426     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 21:38:44.605829   15495 kubeadm.go:310] W0827 21:38:35.358217     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 21:38:44.605943   15495 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 21:38:44.605962   15495 cni.go:84] Creating CNI manager for ""
	I0827 21:38:44.605972   15495 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 21:38:44.607855   15495 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 21:38:44.609123   15495 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 21:38:44.618907   15495 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
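The 496-byte bridge CNI config lands in /etc/cni/net.d/1-k8s.conflist; CNI configs are loaded in lexical order, so the low prefix ensures it is picked up ahead of anything else left in that directory (the podman bridge config was renamed to *.mk_disabled earlier in the log). A quick check, with the expectation about its contents hedged as an assumption based on the pod CIDR logged at 21:38:33.982:

    # Sketch: confirm the bridge CNI config written above is the one CRI-O will load.
    ls -l /etc/cni/net.d/                    # 1-k8s.conflist should sort first; 87-podman-bridge.conflist.mk_disabled is ignored
    sudo cat /etc/cni/net.d/1-k8s.conflist   # expected: a bridge plugin whose IPAM covers the 10.244.0.0/16 pod CIDR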
	I0827 21:38:44.635980   15495 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 21:38:44.636061   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:44.636094   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-709833 minikube.k8s.io/updated_at=2024_08_27T21_38_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=addons-709833 minikube.k8s.io/primary=true
	I0827 21:38:44.667519   15495 ops.go:34] apiserver oom_adj: -16
	I0827 21:38:44.754592   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:45.255304   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:45.755140   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:46.255226   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:46.754749   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:47.255232   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:47.754860   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:48.254637   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:48.754718   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:49.255622   15495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 21:38:49.409467   15495 kubeadm.go:1113] duration metric: took 4.773469563s to wait for elevateKubeSystemPrivileges
	I0827 21:38:49.409525   15495 kubeadm.go:394] duration metric: took 14.185954079s to StartCluster
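The burst of "kubectl get sa default" calls above appears to be minikube polling until the default service account exists before it declares the cluster started. The same wait, written as a loop against the kubeconfig the log uses:

    # Sketch: poll for the default service account the way the repeated calls above do.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done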
	I0827 21:38:49.409549   15495 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:49.409710   15495 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 21:38:49.410274   15495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 21:38:49.410559   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0827 21:38:49.410564   15495 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 21:38:49.410669   15495 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0827 21:38:49.410763   15495 config.go:182] Loaded profile config "addons-709833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 21:38:49.410776   15495 addons.go:69] Setting helm-tiller=true in profile "addons-709833"
	I0827 21:38:49.410788   15495 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-709833"
	I0827 21:38:49.410767   15495 addons.go:69] Setting yakd=true in profile "addons-709833"
	I0827 21:38:49.410817   15495 addons.go:69] Setting ingress=true in profile "addons-709833"
	I0827 21:38:49.410832   15495 addons.go:234] Setting addon yakd=true in "addons-709833"
	I0827 21:38:49.410836   15495 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-709833"
	I0827 21:38:49.410845   15495 addons.go:69] Setting inspektor-gadget=true in profile "addons-709833"
	I0827 21:38:49.410864   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.410869   15495 addons.go:69] Setting default-storageclass=true in profile "addons-709833"
	I0827 21:38:49.410876   15495 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-709833"
	I0827 21:38:49.410894   15495 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-709833"
	I0827 21:38:49.410900   15495 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-709833"
	I0827 21:38:49.410927   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.410871   15495 addons.go:69] Setting metrics-server=true in profile "addons-709833"
	I0827 21:38:49.411040   15495 addons.go:234] Setting addon metrics-server=true in "addons-709833"
	I0827 21:38:49.411058   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.410834   15495 addons.go:234] Setting addon ingress=true in "addons-709833"
	I0827 21:38:49.411132   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.411298   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.411326   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.411401   15495 addons.go:69] Setting registry=true in profile "addons-709833"
	I0827 21:38:49.410811   15495 addons.go:234] Setting addon helm-tiller=true in "addons-709833"
	I0827 21:38:49.410783   15495 addons.go:69] Setting cloud-spanner=true in profile "addons-709833"
	I0827 21:38:49.411438   15495 addons.go:234] Setting addon registry=true in "addons-709833"
	I0827 21:38:49.410866   15495 addons.go:234] Setting addon inspektor-gadget=true in "addons-709833"
	I0827 21:38:49.411468   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.411480   15495 addons.go:69] Setting storage-provisioner=true in profile "addons-709833"
	I0827 21:38:49.411502   15495 addons.go:234] Setting addon storage-provisioner=true in "addons-709833"
	I0827 21:38:49.411524   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.411537   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.410768   15495 addons.go:69] Setting gcp-auth=true in profile "addons-709833"
	I0827 21:38:49.411573   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.411586   15495 mustload.go:65] Loading cluster: addons-709833
	I0827 21:38:49.410839   15495 addons.go:69] Setting ingress-dns=true in profile "addons-709833"
	I0827 21:38:49.411672   15495 addons.go:234] Setting addon ingress-dns=true in "addons-709833"
	I0827 21:38:49.411773   15495 config.go:182] Loaded profile config "addons-709833": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 21:38:49.411801   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.411823   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.411422   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.411444   15495 addons.go:234] Setting addon cloud-spanner=true in "addons-709833"
	I0827 21:38:49.411879   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.411885   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.411899   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.410864   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.411917   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.411974   15495 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-709833"
	I0827 21:38:49.411997   15495 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-709833"
	I0827 21:38:49.412005   15495 addons.go:69] Setting volcano=true in profile "addons-709833"
	I0827 21:38:49.412022   15495 addons.go:234] Setting addon volcano=true in "addons-709833"
	I0827 21:38:49.412030   15495 addons.go:69] Setting volumesnapshots=true in profile "addons-709833"
	I0827 21:38:49.412048   15495 addons.go:234] Setting addon volumesnapshots=true in "addons-709833"
	I0827 21:38:49.412050   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.411472   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.412076   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412215   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412220   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.412239   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412239   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.412397   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412420   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412492   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412505   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412582   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412222   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412613   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412614   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412622   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412628   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.412650   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412664   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.412767   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.412791   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.413100   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.413122   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.413155   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.413183   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.414162   15495 out.go:177] * Verifying Kubernetes components...
	I0827 21:38:49.414467   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.414493   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.416141   15495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 21:38:49.434724   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0827 21:38:49.435201   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.436120   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.436147   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.436542   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.437119   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0827 21:38:49.437192   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.437218   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.437598   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.437681   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0827 21:38:49.438121   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.438136   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.438202   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.439117   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.439142   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.439216   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.439856   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.439889   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.440093   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.440331   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.442314   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.442700   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.442721   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.451414   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0827 21:38:49.451906   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.452418   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.452452   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.452792   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0827 21:38:49.452863   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0827 21:38:49.453661   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.453739   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.454325   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.454348   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.454412   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.454435   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.454673   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0827 21:38:49.456738   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.456809   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.456873   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0827 21:38:49.457350   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.457382   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.460600   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.460893   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.460911   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.460979   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.461040   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0827 21:38:49.461140   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39363
	I0827 21:38:49.461985   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.462054   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.462211   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.462223   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.462664   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.462695   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.468775   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.468971   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.468992   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.469051   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.469164   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.469174   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.469378   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.469390   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.469834   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.469874   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.476639   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.476702   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.476750   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.477373   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.477415   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.477944   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.478008   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.484037   15495 addons.go:234] Setting addon default-storageclass=true in "addons-709833"
	I0827 21:38:49.484089   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.484509   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.484546   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.490814   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I0827 21:38:49.491051   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.491592   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.491687   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I0827 21:38:49.492195   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.492221   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.492544   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.492642   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.493185   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.493201   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.493412   15495 out.go:177]   - Using image docker.io/registry:2.8.3
	I0827 21:38:49.493525   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.494111   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.494159   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.496171   15495 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0827 21:38:49.496691   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.497535   15495 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0827 21:38:49.497573   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0827 21:38:49.497610   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.499050   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.500947   15495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0827 21:38:49.502149   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.502204   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46537
	I0827 21:38:49.502421   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.502452   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.502717   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0827 21:38:49.502759   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.502976   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.503140   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.503310   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.503339   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.503833   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.503857   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.504227   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.504432   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.504523   15495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 21:38:49.504985   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0827 21:38:49.505477   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.506147   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.506168   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.506245   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.506886   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.506903   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.506966   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.507085   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.507552   15495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 21:38:49.507770   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.507802   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.508264   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.508726   15495 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0827 21:38:49.508817   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.508864   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.509121   15495 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 21:38:49.509142   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0827 21:38:49.509169   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.509941   15495 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0827 21:38:49.509957   15495 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0827 21:38:49.509973   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.510375   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0827 21:38:49.511642   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0827 21:38:49.512529   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.513027   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.513050   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.513887   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.513934   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.513961   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.514447   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.514473   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.514693   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.515160   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.515217   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.515932   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.515983   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.516006   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.515983   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.516026   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.516017   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.516058   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.516193   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.516199   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.516376   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.516427   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.516480   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.516674   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.517149   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.518095   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
	I0827 21:38:49.518240   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0827 21:38:49.518685   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.519283   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.519299   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.519355   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.519723   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.519747   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:49.519758   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:49.519925   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.519979   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:49.519997   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:49.520006   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:49.520017   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:49.520025   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:49.520999   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.522364   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.522381   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.522444   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:49.522466   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:49.522474   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	W0827 21:38:49.522558   15495 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0827 21:38:49.523057   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.523673   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.524228   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.525065   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I0827 21:38:49.525528   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.525998   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0827 21:38:49.526033   15495 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 21:38:49.526041   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.526058   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.526687   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.526705   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.526894   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.526896   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0827 21:38:49.527256   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.527323   15495 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 21:38:49.527338   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 21:38:49.527354   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.528238   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.528260   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.528800   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.529557   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.529593   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.529846   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I0827 21:38:49.529856   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I0827 21:38:49.530012   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.530423   15495 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0827 21:38:49.530872   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.530875   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.530918   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.530878   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.531092   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.531457   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.531474   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.531574   15495 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0827 21:38:49.531588   15495 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0827 21:38:49.531607   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.531839   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.532061   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.532082   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.532173   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.532434   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.532661   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0827 21:38:49.533062   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.533091   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.533131   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.533150   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.533647   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.533666   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.533981   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.534110   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.534128   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.534370   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.534523   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.534799   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.534863   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.535140   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.535165   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.535252   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.536161   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.536186   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0827 21:38:49.537205   15495 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-709833"
	I0827 21:38:49.537244   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:49.537455   15495 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0827 21:38:49.537587   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.536656   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.537609   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.537963   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.537998   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.538018   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.538156   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.538201   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0827 21:38:49.538284   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.538756   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.538828   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.538853   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.538931   15495 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0827 21:38:49.538944   15495 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0827 21:38:49.538962   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.539805   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.539866   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.540997   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.541027   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.541112   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0827 21:38:49.542192   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0827 21:38:49.543779   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.544211   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.544246   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.544427   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.544605   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.544758   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.544878   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0827 21:38:49.544897   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.546943   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0827 21:38:49.548055   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0827 21:38:49.549117   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0827 21:38:49.550248   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0827 21:38:49.550272   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0827 21:38:49.550837   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.551507   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.551524   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.551538   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0827 21:38:49.551552   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0827 21:38:49.551573   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.551894   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.552087   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.552572   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0827 21:38:49.553116   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.553703   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.553719   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.554069   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.554173   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.554593   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.555760   15495 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0827 21:38:49.556581   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.556715   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.557066   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.557090   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.557138   15495 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 21:38:49.557158   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0827 21:38:49.557181   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.557342   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.557519   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.557664   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.557982   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.558228   15495 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0827 21:38:49.559413   15495 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0827 21:38:49.559427   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0827 21:38:49.559441   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.560749   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.561319   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.561353   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.561619   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.561810   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.561969   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.562088   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.564268   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.564770   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.564803   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.564980   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.565144   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.565333   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.565493   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.566626   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0827 21:38:49.567493   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.568007   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.568028   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.568401   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.568974   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:49.569016   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:49.570107   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0827 21:38:49.570269   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I0827 21:38:49.570495   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.570885   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.570900   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.571311   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.571557   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.571832   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.572481   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.572498   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.573048   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.573234   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.573714   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.574532   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0827 21:38:49.574930   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.575216   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.575278   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0827 21:38:49.575632   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.575661   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.575744   15495 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0827 21:38:49.575975   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.576145   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.576264   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.576627   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.576641   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.576794   15495 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0827 21:38:49.576834   15495 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 21:38:49.576848   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0827 21:38:49.576867   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.577957   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0827 21:38:49.578014   15495 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0827 21:38:49.578032   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.578896   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.579150   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.579313   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.579674   15495 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 21:38:49.579688   15495 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 21:38:49.579713   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.580688   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.581095   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.581127   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.581408   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.581596   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.581751   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.581900   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.582276   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.583149   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.583532   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.583557   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.583781   15495 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0827 21:38:49.583799   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.583838   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.584008   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.584271   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.584401   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.584489   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.584507   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.584685   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.584848   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.585037   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.585102   15495 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0827 21:38:49.585114   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0827 21:38:49.585129   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.585147   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.587922   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.588323   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.588361   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.588567   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.588724   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.588862   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.588966   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.589193   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	W0827 21:38:49.589568   15495 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35416->192.168.39.186:22: read: connection reset by peer
	I0827 21:38:49.589594   15495 retry.go:31] will retry after 351.268471ms: ssh: handshake failed: read tcp 192.168.39.1:35416->192.168.39.186:22: read: connection reset by peer
	I0827 21:38:49.605196   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:49.605791   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:49.605815   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:49.606175   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:49.606345   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:49.607868   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:49.609839   15495 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0827 21:38:49.611203   15495 out.go:177]   - Using image docker.io/busybox:stable
	I0827 21:38:49.612307   15495 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 21:38:49.612322   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0827 21:38:49.612338   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:49.615385   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.615815   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:49.615834   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:49.615985   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:49.616177   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:49.616349   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:49.616535   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:49.882899   15495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0827 21:38:49.882919   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0827 21:38:49.898136   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 21:38:49.913099   15495 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0827 21:38:49.913127   15495 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0827 21:38:49.942511   15495 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0827 21:38:49.942535   15495 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0827 21:38:49.947601   15495 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0827 21:38:49.947623   15495 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0827 21:38:49.980435   15495 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0827 21:38:49.980478   15495 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0827 21:38:49.981036   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 21:38:50.011142   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 21:38:50.028557   15495 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0827 21:38:50.028582   15495 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0827 21:38:50.043684   15495 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0827 21:38:50.043715   15495 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0827 21:38:50.083924   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0827 21:38:50.083955   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0827 21:38:50.097056   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 21:38:50.097056   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 21:38:50.103256   15495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 21:38:50.103295   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0827 21:38:50.129711   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 21:38:50.145437   15495 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0827 21:38:50.145458   15495 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0827 21:38:50.150038   15495 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0827 21:38:50.150058   15495 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0827 21:38:50.179149   15495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0827 21:38:50.179180   15495 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0827 21:38:50.215132   15495 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0827 21:38:50.215163   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0827 21:38:50.229536   15495 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0827 21:38:50.229566   15495 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0827 21:38:50.232634   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0827 21:38:50.290502   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0827 21:38:50.290534   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0827 21:38:50.342346   15495 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0827 21:38:50.342370   15495 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0827 21:38:50.364655   15495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 21:38:50.364682   15495 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0827 21:38:50.377845   15495 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0827 21:38:50.377871   15495 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0827 21:38:50.388858   15495 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0827 21:38:50.388880   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0827 21:38:50.423870   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0827 21:38:50.424695   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0827 21:38:50.577800   15495 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0827 21:38:50.577825   15495 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0827 21:38:50.580357   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0827 21:38:50.580378   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0827 21:38:50.595782   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 21:38:50.628369   15495 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0827 21:38:50.628400   15495 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0827 21:38:50.631869   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0827 21:38:50.746214   15495 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0827 21:38:50.746244   15495 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0827 21:38:50.756243   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0827 21:38:50.756265   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0827 21:38:50.787693   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0827 21:38:50.787722   15495 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0827 21:38:50.879396   15495 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0827 21:38:50.879423   15495 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0827 21:38:50.953331   15495 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0827 21:38:50.953366   15495 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0827 21:38:51.076686   15495 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 21:38:51.076713   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0827 21:38:51.159210   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0827 21:38:51.159232   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0827 21:38:51.174211   15495 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 21:38:51.174231   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0827 21:38:51.416669   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 21:38:51.432302   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0827 21:38:51.432330   15495 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0827 21:38:51.451986   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 21:38:51.752187   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0827 21:38:51.752213   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0827 21:38:51.936974   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0827 21:38:51.936994   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0827 21:38:52.061476   15495 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 21:38:52.061502   15495 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0827 21:38:52.220513   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 21:38:52.837995   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.939818397s)
	I0827 21:38:52.838067   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:52.838085   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:52.838401   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:52.838421   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:52.838430   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:52.838437   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:52.838840   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:52.838882   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:52.838895   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.528125   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.547056237s)
	I0827 21:38:54.528173   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.517000785s)
	I0827 21:38:54.528175   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528208   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528229   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.431104806s)
	I0827 21:38:54.528236   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.431157521s)
	I0827 21:38:54.528249   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.528258   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528272   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.528291   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528307   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.528336   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.528344   15495 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.425063747s)
	I0827 21:38:54.528411   15495 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.425099873s)
	I0827 21:38:54.528434   15495 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0827 21:38:54.528516   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.528537   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.528547   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528551   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.528556   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.528563   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.528571   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.528578   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.529397   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.529409   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.529419   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.529417   15495 node_ready.go:35] waiting up to 6m0s for node "addons-709833" to be "Ready" ...
	I0827 21:38:54.529441   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.529452   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.529460   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.529423   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.529427   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.529695   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.529710   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.529851   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.529787   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.529989   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.529795   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.530000   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.530009   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.530091   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.530101   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.530110   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.530828   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.530841   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:54.569596   15495 node_ready.go:49] node "addons-709833" has status "Ready":"True"
	I0827 21:38:54.569624   15495 node_ready.go:38] duration metric: took 40.188709ms for node "addons-709833" to be "Ready" ...
	I0827 21:38:54.569635   15495 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 21:38:54.627585   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.627610   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.627944   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.628023   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.628035   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	W0827 21:38:54.628242   15495 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0827 21:38:54.639669   15495 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2kzjx" in "kube-system" namespace to be "Ready" ...
	I0827 21:38:54.651435   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:54.651456   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:54.651851   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:54.651864   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:54.651872   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:55.053333   15495 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-709833" context rescaled to 1 replicas
	I0827 21:38:56.612924   15495 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0827 21:38:56.613003   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:56.616395   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:56.616822   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:56.616854   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:56.616990   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:56.617197   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:56.617418   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:56.617588   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:56.735038   15495 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0827 21:38:56.767800   15495 pod_ready.go:98] pod "coredns-6f6b679f8f-2kzjx" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:56 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.186 HostIPs:[{IP:192.168.39.186}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-27 21:38:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-08-27 21:38:54 +0000 UTC,FinishedAt:2024-08-27 21:38:55 +0000 UTC,ContainerID:cri-o://8c206e8fcc91e66a2ed69f473cee195188a13b746154ad737b2dc617f30e053f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://8c206e8fcc91e66a2ed69f473cee195188a13b746154ad737b2dc617f30e053f Started:0xc00196d66c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000852000} {Name:kube-api-access-2rz5c MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000852020}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0827 21:38:56.767835   15495 pod_ready.go:82] duration metric: took 2.128140717s for pod "coredns-6f6b679f8f-2kzjx" in "kube-system" namespace to be "Ready" ...
	E0827 21:38:56.767850   15495 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-2kzjx" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:56 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 21:38:49 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.186 HostIPs:[{IP:192.168.39.186}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-27 21:38:49 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-08-27 21:38:54 +0000 UTC,FinishedAt:2024-08-27 21:38:55 +0000 UTC,ContainerID:cri-o://8c206e8fcc91e66a2ed69f473cee195188a13b746154ad737b2dc617f30e053f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://8c206e8fcc91e66a2ed69f473cee195188a13b746154ad737b2dc617f30e053f Started:0xc00196d66c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000852000} {Name:kube-api-access-2rz5c MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000852020}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0827 21:38:56.767863   15495 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-tc4fc" in "kube-system" namespace to be "Ready" ...
	I0827 21:38:56.774653   15495 addons.go:234] Setting addon gcp-auth=true in "addons-709833"
	I0827 21:38:56.774707   15495 host.go:66] Checking if "addons-709833" exists ...
	I0827 21:38:56.774992   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:56.775017   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:56.790092   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I0827 21:38:56.790506   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:56.791015   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:56.791041   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:56.791419   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:56.791872   15495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 21:38:56.791900   15495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 21:38:56.807598   15495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0827 21:38:56.808065   15495 main.go:141] libmachine: () Calling .GetVersion
	I0827 21:38:56.808606   15495 main.go:141] libmachine: Using API Version  1
	I0827 21:38:56.808631   15495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 21:38:56.808928   15495 main.go:141] libmachine: () Calling .GetMachineName
	I0827 21:38:56.809104   15495 main.go:141] libmachine: (addons-709833) Calling .GetState
	I0827 21:38:56.810712   15495 main.go:141] libmachine: (addons-709833) Calling .DriverName
	I0827 21:38:56.810949   15495 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0827 21:38:56.810978   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHHostname
	I0827 21:38:56.813493   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:56.813934   15495 main.go:141] libmachine: (addons-709833) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:dd:69", ip: ""} in network mk-addons-709833: {Iface:virbr1 ExpiryTime:2024-08-27 22:38:19 +0000 UTC Type:0 Mac:52:54:00:be:dd:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:addons-709833 Clientid:01:52:54:00:be:dd:69}
	I0827 21:38:56.813967   15495 main.go:141] libmachine: (addons-709833) DBG | domain addons-709833 has defined IP address 192.168.39.186 and MAC address 52:54:00:be:dd:69 in network mk-addons-709833
	I0827 21:38:56.814085   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHPort
	I0827 21:38:56.814251   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHKeyPath
	I0827 21:38:56.814408   15495 main.go:141] libmachine: (addons-709833) Calling .GetSSHUsername
	I0827 21:38:56.814591   15495 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/addons-709833/id_rsa Username:docker}
	I0827 21:38:57.659675   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.529920299s)
	I0827 21:38:57.659732   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.659735   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.427066987s)
	I0827 21:38:57.659769   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.659770   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.235865152s)
	I0827 21:38:57.659775   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.659791   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.659799   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.659814   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.659872   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.235154042s)
	I0827 21:38:57.659945   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.064125443s)
	I0827 21:38:57.659996   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660044   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.02814661s)
	I0827 21:38:57.660090   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660113   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660112   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.659948   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660124   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.660131   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660133   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660136   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660142   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660249   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.243542735s)
	W0827 21:38:57.660279   15495 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0827 21:38:57.660297   15495 retry.go:31] will retry after 272.329827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0827 21:38:57.660328   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.208303858s)
	I0827 21:38:57.660353   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660363   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660376   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.660382   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.660389   15495 addons.go:475] Verifying addon ingress=true in "addons-709833"
	I0827 21:38:57.660047   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.660413   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.660422   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660429   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660354   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660511   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660638   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660738   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660780   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.660787   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.660796   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.660804   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.660843   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.660861   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.660868   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.660875   15495 addons.go:475] Verifying addon registry=true in "addons-709833"
	I0827 21:38:57.661584   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.661613   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.661620   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.661627   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.661634   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.661713   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.661730   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.661737   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.661803   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.662355   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.662426   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.662453   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.662598   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.662749   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.662751   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.662759   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.662767   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.662435   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.662833   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.662453   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.662843   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.662851   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.663128   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.663165   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.663172   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.663427   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.663476   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.663568   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.663586   15495 addons.go:475] Verifying addon metrics-server=true in "addons-709833"
	I0827 21:38:57.663730   15495 out.go:177] * Verifying registry addon...
	I0827 21:38:57.663756   15495 out.go:177] * Verifying ingress addon...
	I0827 21:38:57.663807   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.664233   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.664249   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:57.664330   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:57.664692   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:57.664724   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:57.664730   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:57.664746   15495 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-709833 service yakd-dashboard -n yakd-dashboard
	
	I0827 21:38:57.666642   15495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0827 21:38:57.667094   15495 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0827 21:38:57.676968   15495 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0827 21:38:57.676985   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:38:57.684755   15495 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0827 21:38:57.684771   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:38:57.933785   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 21:38:58.191808   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:38:58.193421   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:38:58.369951   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.149378023s)
	I0827 21:38:58.369963   15495 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.558991021s)
	I0827 21:38:58.370008   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:58.370186   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:58.370445   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:58.370459   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:58.370469   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:58.370476   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:58.371019   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:58.371040   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:58.371058   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:58.371074   15495 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-709833"
	I0827 21:38:58.371660   15495 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0827 21:38:58.372533   15495 out.go:177] * Verifying csi-hostpath-driver addon...
	I0827 21:38:58.374199   15495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 21:38:58.374859   15495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0827 21:38:58.375416   15495 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0827 21:38:58.375434   15495 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0827 21:38:58.405358   15495 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0827 21:38:58.405386   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:38:58.504719   15495 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0827 21:38:58.504742   15495 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0827 21:38:58.588388   15495 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 21:38:58.588411   15495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0827 21:38:58.644321   15495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 21:38:58.672235   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:38:58.672503   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:38:58.772958   15495 pod_ready.go:103] pod "coredns-6f6b679f8f-tc4fc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:38:58.880170   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:38:59.173415   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:38:59.174086   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:38:59.476779   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:38:59.672137   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:38:59.672983   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:38:59.844739   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.910901903s)
	I0827 21:38:59.844791   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:59.844809   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:59.845175   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:59.845227   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:59.845241   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:38:59.845250   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:38:59.845263   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:59.845539   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:38:59.845581   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:38:59.845589   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:38:59.903954   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:00.068219   15495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.423857671s)
	I0827 21:39:00.068289   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:39:00.068309   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:39:00.068607   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:39:00.068635   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:39:00.068644   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:39:00.068673   15495 main.go:141] libmachine: Making call to close driver server
	I0827 21:39:00.068685   15495 main.go:141] libmachine: (addons-709833) Calling .Close
	I0827 21:39:00.068890   15495 main.go:141] libmachine: Successfully made call to close driver server
	I0827 21:39:00.068899   15495 main.go:141] libmachine: (addons-709833) DBG | Closing plugin on server side
	I0827 21:39:00.068905   15495 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 21:39:00.070582   15495 addons.go:475] Verifying addon gcp-auth=true in "addons-709833"
	I0827 21:39:00.072074   15495 out.go:177] * Verifying gcp-auth addon...
	I0827 21:39:00.073656   15495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0827 21:39:00.136303   15495 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0827 21:39:00.136340   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:00.206931   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:00.208210   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:00.277449   15495 pod_ready.go:93] pod "coredns-6f6b679f8f-tc4fc" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.277475   15495 pod_ready.go:82] duration metric: took 3.509600805s for pod "coredns-6f6b679f8f-tc4fc" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.277487   15495 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.285541   15495 pod_ready.go:93] pod "etcd-addons-709833" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.285562   15495 pod_ready.go:82] duration metric: took 8.068413ms for pod "etcd-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.285573   15495 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.290664   15495 pod_ready.go:93] pod "kube-apiserver-addons-709833" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.290680   15495 pod_ready.go:82] duration metric: took 5.099529ms for pod "kube-apiserver-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.290689   15495 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.296791   15495 pod_ready.go:93] pod "kube-controller-manager-addons-709833" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.296818   15495 pod_ready.go:82] duration metric: took 6.121374ms for pod "kube-controller-manager-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.296835   15495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-75s27" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.304524   15495 pod_ready.go:93] pod "kube-proxy-75s27" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.304550   15495 pod_ready.go:82] duration metric: took 7.701272ms for pod "kube-proxy-75s27" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.304563   15495 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.380185   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:00.578750   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:00.673585   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:00.674301   15495 pod_ready.go:93] pod "kube-scheduler-addons-709833" in "kube-system" namespace has status "Ready":"True"
	I0827 21:39:00.674318   15495 pod_ready.go:82] duration metric: took 369.74689ms for pod "kube-scheduler-addons-709833" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.674326   15495 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace to be "Ready" ...
	I0827 21:39:00.674418   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:00.879019   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:01.077468   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:01.179453   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:01.179652   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:01.380239   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:01.577331   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:01.670590   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:01.670820   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:01.879691   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:02.077337   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:02.170102   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:02.171901   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:02.380849   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:02.578051   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:02.671193   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:02.671471   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:02.679443   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:02.881589   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:03.076962   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:03.171955   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:03.171975   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:03.379236   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:03.579181   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:03.671510   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:03.671745   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:03.879661   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:04.077473   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:04.179168   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:04.179219   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:04.380819   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:04.577640   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:04.671083   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:04.671487   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:04.679766   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:04.880128   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:05.077737   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:05.170598   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:05.171839   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:05.380416   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:05.577012   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:05.670239   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:05.671072   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:05.880058   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:06.077551   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:06.171329   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:06.171608   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:06.380109   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:06.577277   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:06.670768   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:06.670931   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:06.680034   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:06.879425   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:07.076872   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:07.170940   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:07.171395   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:07.379125   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:07.579595   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:07.672168   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:07.672517   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:07.881117   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:08.076843   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:08.171728   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:08.172723   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:08.380182   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:08.577353   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:08.670978   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:08.672914   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:08.881395   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:09.077925   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:09.171932   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:09.172888   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:09.179692   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:09.379627   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:09.577430   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:09.671264   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:09.671809   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:09.883676   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:10.076927   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:10.169985   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:10.172497   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:10.379093   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:10.577519   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:10.670837   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:10.671525   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:10.881055   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:11.077201   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:11.171151   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:11.172376   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:11.182228   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:11.379510   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:11.581966   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:11.671942   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:11.671974   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:11.879472   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:12.076774   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:12.172364   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:12.172583   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:12.379886   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:12.578806   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:12.705944   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:12.706510   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:12.880492   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:13.077246   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:13.171787   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:13.172232   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:13.380156   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:13.916732   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:13.916732   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:13.916784   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:13.917237   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:14.032416   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:14.076804   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:14.171410   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:14.171900   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:14.380330   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:14.578254   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:14.679822   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:14.680713   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:14.880740   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:15.077870   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:15.172387   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:15.172619   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:15.379821   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:15.578583   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:15.671994   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:15.672119   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:15.880801   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:16.077675   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:16.171237   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:16.171413   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:16.179510   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:16.379774   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:16.577746   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:16.672078   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:16.672221   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:16.878667   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:17.079168   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:17.170195   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:17.173983   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:17.379597   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:17.576575   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:17.671425   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:17.672063   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:17.879854   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:18.077970   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:18.170489   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:18.172005   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:18.180805   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:18.381238   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:18.577789   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:18.672229   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:18.672280   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:18.879824   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:19.077737   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:19.172058   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:19.172339   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:19.379598   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:19.580038   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:19.670113   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:19.671791   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:19.879942   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:20.220241   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:20.220573   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:20.221218   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:20.221654   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:20.379486   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:20.577016   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:20.671248   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:20.671525   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:20.878788   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:21.077149   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:21.170798   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:21.171528   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:21.378925   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:21.577545   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:21.674456   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:21.674589   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:21.879506   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:22.077916   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:22.170220   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:22.171628   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:22.380375   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:22.576583   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:22.672060   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:22.672261   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:22.679706   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:22.879313   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:23.077734   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:23.171038   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:23.171337   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:23.378931   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:23.577222   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:23.670705   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:23.671085   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:23.879328   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:24.078010   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:24.170342   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:24.174352   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:24.380725   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:24.577892   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:24.670245   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:24.671683   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:24.679977   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:24.879304   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:25.078873   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:25.180238   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:25.180479   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:25.380429   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:25.576763   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:25.671141   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:25.671626   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:25.879920   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:26.078481   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:26.171608   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:26.173253   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:26.379927   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:26.576885   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:26.670924   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:26.672192   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:26.880018   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:27.077194   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:27.171349   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:27.171870   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:27.181169   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:27.379430   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:27.577114   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:27.678727   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:27.678737   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:27.879499   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:28.078049   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:28.170075   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:28.172052   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:28.381240   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:28.577483   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:28.670972   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:28.672048   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:28.879916   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:29.077384   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:29.171645   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:29.171788   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:29.181975   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:29.379841   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:29.577858   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:29.672168   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:29.672773   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:29.878976   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:30.076921   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:30.171413   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:30.172280   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:30.379598   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:30.577378   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:30.670859   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:30.670919   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:30.878881   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:31.077377   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:31.171781   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:31.173209   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:31.378851   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:31.580478   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:31.671383   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:31.671749   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:31.682415   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:31.879033   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:32.077847   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:32.179167   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:32.179943   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:32.379248   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:32.577918   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:32.672228   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:32.673046   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:32.879092   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:33.079059   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:33.180771   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:33.183150   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:33.379355   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:33.578007   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:33.670000   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:33.671664   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:33.880062   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:34.076817   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:34.171099   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:34.171235   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:34.179383   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:34.380959   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:34.580289   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:34.671548   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:34.671666   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:34.880059   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:35.077860   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:35.169490   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:35.171248   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:35.379373   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:35.576671   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:35.670752   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:35.671291   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:35.879141   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:36.077579   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:36.171105   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:36.171232   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:36.379965   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:36.576989   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:36.670178   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:36.671479   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:36.679545   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:36.879734   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:37.077055   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:37.171327   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:37.171638   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:37.379335   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:37.578070   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:37.670455   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:37.672267   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:37.879414   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:38.077674   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:38.173016   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:38.174618   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:38.380133   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:38.577492   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:38.671738   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:38.671893   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:38.680693   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:38.879738   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:39.077970   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:39.171389   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:39.172293   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:39.380359   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:39.578471   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:39.671016   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:39.671095   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:39.880487   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:40.078428   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:40.181089   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:40.181474   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:40.378790   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:40.577251   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:40.670538   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:40.670791   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:40.879368   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:41.076788   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:41.171031   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:41.171838   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:41.180181   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:41.379159   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:41.577753   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:41.671186   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:41.671206   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:41.879748   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:42.077496   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:42.171701   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:42.172053   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:42.773868   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:42.774247   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:42.774578   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:42.774911   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:42.878977   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:43.077601   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:43.178290   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:43.179077   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:43.183235   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:43.380826   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:43.577767   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:43.671428   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:43.672072   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:43.880955   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:44.077462   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:44.170825   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:44.171349   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:44.380350   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:44.577264   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:44.674599   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:44.674899   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:44.879445   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:45.077409   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:45.171507   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:45.171568   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:45.380848   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:45.577789   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:45.671705   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:45.673572   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:45.680333   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:45.879616   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:46.077587   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:46.171683   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:46.172320   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:46.379803   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:46.577882   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:46.671711   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:46.673736   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:46.879443   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:47.076607   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:47.171462   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:47.171817   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:47.379325   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:47.577582   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:47.672092   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:47.672419   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:47.894517   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:48.077536   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:48.171804   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:48.172087   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:48.179352   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:48.379409   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:48.809379   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:48.809660   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:48.809827   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:48.879508   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:49.077118   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:49.170690   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:49.170956   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:49.381562   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:49.578897   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:49.671492   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:49.672434   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:49.879091   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:50.077039   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:50.170377   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 21:39:50.171762   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:50.179527   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:50.380211   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:50.577633   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:50.670761   15495 kapi.go:107] duration metric: took 53.004118259s to wait for kubernetes.io/minikube-addons=registry ...
	I0827 21:39:50.671668   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:50.879792   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:51.077417   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:51.172838   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:51.395674   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:51.578702   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:51.671144   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:51.880057   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:52.077419   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:52.171163   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:52.180044   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:52.379851   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:52.577805   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:52.671189   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:53.211965   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:53.342210   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:53.343735   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:53.443032   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:53.594460   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:53.694527   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:53.881266   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:54.078541   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:54.171329   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:54.180325   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:54.379082   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:54.577780   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:54.671724   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:54.879804   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:55.078213   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:55.172621   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:55.379458   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:55.584961   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:55.672020   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:55.880250   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:56.078603   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:56.174258   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:56.182782   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:56.379665   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:56.577772   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:56.673983   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:56.998216   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:57.097072   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:57.197804   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:57.380345   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:57.576715   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:57.672308   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:57.879027   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:58.077804   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:58.171405   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:58.379763   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:58.577127   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:58.672109   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:58.683231   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:39:58.880372   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:59.077702   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:59.171137   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:59.381784   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:39:59.586107   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:39:59.675086   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:39:59.884855   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:00.077925   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:00.171107   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:00.378642   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:00.576933   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:00.671230   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:00.878904   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:01.077267   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:01.171857   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:01.180672   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:01.379603   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:01.577295   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:01.672112   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:01.886822   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:02.076953   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:02.171549   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:02.379499   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:02.577197   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:02.671575   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:02.880079   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:03.081859   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:03.186280   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:03.194811   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:03.379474   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:03.576950   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:03.672155   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:03.881241   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:04.080370   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:04.172646   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:04.380541   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:04.577773   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:05.073116   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:05.073518   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:05.079091   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:05.171887   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:05.380015   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:05.577496   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:05.671796   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:05.683473   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:05.880354   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:06.079052   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:06.172890   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:06.379832   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:06.577938   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:06.672023   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:06.880242   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:07.077662   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:07.171497   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:07.390590   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:07.962651   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:07.965999   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:07.969046   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:07.981924   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:08.078065   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:08.171618   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:08.380242   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:08.577560   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:08.671541   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:08.879756   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:09.077944   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:09.171456   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:09.384477   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:09.579877   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:09.682970   15495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 21:40:09.880769   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:10.079201   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:10.179658   15495 kapi.go:107] duration metric: took 1m12.512559855s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0827 21:40:10.191515   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:10.379313   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:10.578099   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:10.881245   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:11.077772   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:11.381086   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:11.577764   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:11.879524   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:12.076791   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:12.379161   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:12.577635   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:12.680179   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:12.879209   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:13.077335   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:13.379912   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:13.577661   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:13.880583   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:14.364286   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:14.379927   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:14.578507   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:14.680336   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:14.880619   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:15.080150   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 21:40:15.379143   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:15.577970   15495 kapi.go:107] duration metric: took 1m15.504311997s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0827 21:40:15.579719   15495 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-709833 cluster.
	I0827 21:40:15.581137   15495 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0827 21:40:15.582426   15495 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0827 21:40:15.879725   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:16.379490   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:16.687639   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:16.879815   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:17.379123   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:17.880283   15495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 21:40:18.381121   15495 kapi.go:107] duration metric: took 1m20.006261263s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0827 21:40:18.382672   15495 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0827 21:40:18.383989   15495 addons.go:510] duration metric: took 1m28.9733306s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin default-storageclass cloud-spanner inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0827 21:40:19.180494   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:21.180612   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:23.181343   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:25.681361   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:27.681922   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:30.180075   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:32.180356   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:34.180910   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:36.680922   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:39.181022   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:41.681547   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:44.181216   15495 pod_ready.go:103] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"False"
	I0827 21:40:44.682762   15495 pod_ready.go:93] pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace has status "Ready":"True"
	I0827 21:40:44.682784   15495 pod_ready.go:82] duration metric: took 1m44.008451898s for pod "metrics-server-8988944d9-k9hsc" in "kube-system" namespace to be "Ready" ...
	I0827 21:40:44.682795   15495 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-t95pq" in "kube-system" namespace to be "Ready" ...
	I0827 21:40:44.689036   15495 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-t95pq" in "kube-system" namespace has status "Ready":"True"
	I0827 21:40:44.689055   15495 pod_ready.go:82] duration metric: took 6.253934ms for pod "nvidia-device-plugin-daemonset-t95pq" in "kube-system" namespace to be "Ready" ...
	I0827 21:40:44.689069   15495 pod_ready.go:39] duration metric: took 1m50.119421922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 21:40:44.689086   15495 api_server.go:52] waiting for apiserver process to appear ...
	I0827 21:40:44.689111   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0827 21:40:44.689155   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 21:40:44.759101   15495 cri.go:89] found id: "2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:40:44.759123   15495 cri.go:89] found id: ""
	I0827 21:40:44.759131   15495 logs.go:276] 1 containers: [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676]
	I0827 21:40:44.759178   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.767437   15495 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0827 21:40:44.767509   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 21:40:44.804264   15495 cri.go:89] found id: "bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:40:44.804283   15495 cri.go:89] found id: ""
	I0827 21:40:44.804290   15495 logs.go:276] 1 containers: [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198]
	I0827 21:40:44.804345   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.808424   15495 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0827 21:40:44.808497   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 21:40:44.846726   15495 cri.go:89] found id: "ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:40:44.846751   15495 cri.go:89] found id: ""
	I0827 21:40:44.846759   15495 logs.go:276] 1 containers: [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350]
	I0827 21:40:44.846813   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.850800   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0827 21:40:44.850858   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 21:40:44.891535   15495 cri.go:89] found id: "09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:40:44.891562   15495 cri.go:89] found id: ""
	I0827 21:40:44.891573   15495 logs.go:276] 1 containers: [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060]
	I0827 21:40:44.891639   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.895426   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0827 21:40:44.895493   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 21:40:44.937771   15495 cri.go:89] found id: "fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:40:44.937795   15495 cri.go:89] found id: ""
	I0827 21:40:44.937808   15495 logs.go:276] 1 containers: [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de]
	I0827 21:40:44.937861   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.942257   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 21:40:44.942323   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 21:40:44.992177   15495 cri.go:89] found id: "d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:40:44.992195   15495 cri.go:89] found id: ""
	I0827 21:40:44.992203   15495 logs.go:276] 1 containers: [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3]
	I0827 21:40:44.992254   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:44.996447   15495 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0827 21:40:44.996521   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 21:40:45.033868   15495 cri.go:89] found id: ""
	I0827 21:40:45.033890   15495 logs.go:276] 0 containers: []
	W0827 21:40:45.033897   15495 logs.go:278] No container was found matching "kindnet"
	I0827 21:40:45.033905   15495 logs.go:123] Gathering logs for container status ...
	I0827 21:40:45.033916   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 21:40:45.083757   15495 logs.go:123] Gathering logs for kube-scheduler [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060] ...
	I0827 21:40:45.083788   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:40:45.127612   15495 logs.go:123] Gathering logs for kube-proxy [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de] ...
	I0827 21:40:45.127653   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:40:45.176583   15495 logs.go:123] Gathering logs for describe nodes ...
	I0827 21:40:45.176621   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 21:40:45.316014   15495 logs.go:123] Gathering logs for kube-apiserver [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676] ...
	I0827 21:40:45.316051   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:40:45.364423   15495 logs.go:123] Gathering logs for etcd [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198] ...
	I0827 21:40:45.364459   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:40:45.423821   15495 logs.go:123] Gathering logs for coredns [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350] ...
	I0827 21:40:45.423865   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:40:45.463800   15495 logs.go:123] Gathering logs for kube-controller-manager [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3] ...
	I0827 21:40:45.463823   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:40:45.527749   15495 logs.go:123] Gathering logs for CRI-O ...
	I0827 21:40:45.527777   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0827 21:40:46.568542   15495 logs.go:123] Gathering logs for kubelet ...
	I0827 21:40:46.568586   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0827 21:40:46.624152   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:49 addons-709833 kubelet[1210]: W0827 21:38:49.339788    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-709833' and this object
	W0827 21:40:46.624430   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:46.635179   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	W0827 21:40:46.635499   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:46.649101   15495 logs.go:138] Found kubelet problem: Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	W0827 21:40:46.649386   15495 logs.go:138] Found kubelet problem: Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	I0827 21:40:46.675082   15495 logs.go:123] Gathering logs for dmesg ...
	I0827 21:40:46.675116   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 21:40:46.693203   15495 out.go:358] Setting ErrFile to fd 2...
	I0827 21:40:46.693226   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0827 21:40:46.693277   15495 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0827 21:40:46.693286   15495 out.go:270]   Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:46.693294   15495 out.go:270]   Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	  Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	W0827 21:40:46.693303   15495 out.go:270]   Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:46.693310   15495 out.go:270]   Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	  Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	W0827 21:40:46.693316   15495 out.go:270]   Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	I0827 21:40:46.693321   15495 out.go:358] Setting ErrFile to fd 2...
	I0827 21:40:46.693326   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:40:56.694390   15495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 21:40:56.716168   15495 api_server.go:72] duration metric: took 2m7.305568614s to wait for apiserver process to appear ...
	I0827 21:40:56.716198   15495 api_server.go:88] waiting for apiserver healthz status ...
	I0827 21:40:56.716240   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0827 21:40:56.716291   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 21:40:56.760737   15495 cri.go:89] found id: "2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:40:56.760760   15495 cri.go:89] found id: ""
	I0827 21:40:56.760768   15495 logs.go:276] 1 containers: [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676]
	I0827 21:40:56.760816   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.764987   15495 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0827 21:40:56.765057   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 21:40:56.801535   15495 cri.go:89] found id: "bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:40:56.801561   15495 cri.go:89] found id: ""
	I0827 21:40:56.801578   15495 logs.go:276] 1 containers: [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198]
	I0827 21:40:56.801641   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.806294   15495 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0827 21:40:56.806370   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 21:40:56.842331   15495 cri.go:89] found id: "ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:40:56.842353   15495 cri.go:89] found id: ""
	I0827 21:40:56.842365   15495 logs.go:276] 1 containers: [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350]
	I0827 21:40:56.842421   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.846284   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0827 21:40:56.846350   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 21:40:56.883188   15495 cri.go:89] found id: "09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:40:56.883213   15495 cri.go:89] found id: ""
	I0827 21:40:56.883223   15495 logs.go:276] 1 containers: [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060]
	I0827 21:40:56.883274   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.887818   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0827 21:40:56.887892   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 21:40:56.926670   15495 cri.go:89] found id: "fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:40:56.926688   15495 cri.go:89] found id: ""
	I0827 21:40:56.926696   15495 logs.go:276] 1 containers: [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de]
	I0827 21:40:56.926740   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.931134   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 21:40:56.931200   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 21:40:56.975723   15495 cri.go:89] found id: "d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:40:56.975745   15495 cri.go:89] found id: ""
	I0827 21:40:56.975753   15495 logs.go:276] 1 containers: [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3]
	I0827 21:40:56.975805   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:40:56.979859   15495 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0827 21:40:56.979913   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 21:40:57.014638   15495 cri.go:89] found id: ""
	I0827 21:40:57.014662   15495 logs.go:276] 0 containers: []
	W0827 21:40:57.014670   15495 logs.go:278] No container was found matching "kindnet"
	I0827 21:40:57.014678   15495 logs.go:123] Gathering logs for dmesg ...
	I0827 21:40:57.014691   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 21:40:57.030185   15495 logs.go:123] Gathering logs for CRI-O ...
	I0827 21:40:57.030214   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0827 21:40:57.946086   15495 logs.go:123] Gathering logs for container status ...
	I0827 21:40:57.946123   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 21:40:58.005420   15495 logs.go:123] Gathering logs for kubelet ...
	I0827 21:40:58.005450   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0827 21:40:58.051153   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:49 addons-709833 kubelet[1210]: W0827 21:38:49.339788    1210 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-709833' and this object
	W0827 21:40:58.051333   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:58.057394   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	W0827 21:40:58.057559   15495 logs.go:138] Found kubelet problem: Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:58.065449   15495 logs.go:138] Found kubelet problem: Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	W0827 21:40:58.065607   15495 logs.go:138] Found kubelet problem: Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	I0827 21:40:58.091802   15495 logs.go:123] Gathering logs for describe nodes ...
	I0827 21:40:58.091835   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 21:40:58.201497   15495 logs.go:123] Gathering logs for kube-apiserver [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676] ...
	I0827 21:40:58.201527   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:40:58.248642   15495 logs.go:123] Gathering logs for etcd [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198] ...
	I0827 21:40:58.248676   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:40:58.317897   15495 logs.go:123] Gathering logs for coredns [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350] ...
	I0827 21:40:58.317934   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:40:58.354439   15495 logs.go:123] Gathering logs for kube-scheduler [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060] ...
	I0827 21:40:58.354468   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:40:58.410139   15495 logs.go:123] Gathering logs for kube-proxy [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de] ...
	I0827 21:40:58.410172   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:40:58.452572   15495 logs.go:123] Gathering logs for kube-controller-manager [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3] ...
	I0827 21:40:58.452607   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:40:58.508361   15495 out.go:358] Setting ErrFile to fd 2...
	I0827 21:40:58.508396   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0827 21:40:58.508456   15495 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0827 21:40:58.508474   15495 out.go:270]   Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:38:49 addons-709833 kubelet[1210]: E0827 21:38:49.339843    1210 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:58.508489   15495 out.go:270]   Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	  Aug 27 21:38:55 addons-709833 kubelet[1210]: W0827 21:38:55.232406    1210 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-709833" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-709833' and this object
	W0827 21:40:58.508501   15495 out.go:270]   Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:38:55 addons-709833 kubelet[1210]: E0827 21:38:55.232460    1210 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	W0827 21:40:58.508509   15495 out.go:270]   Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	  Aug 27 21:39:00 addons-709833 kubelet[1210]: W0827 21:39:00.110557    1210 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-709833" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-709833' and this object
	W0827 21:40:58.508516   15495 out.go:270]   Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	  Aug 27 21:39:00 addons-709833 kubelet[1210]: E0827 21:39:00.110595    1210 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-709833\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-709833' and this object" logger="UnhandledError"
	I0827 21:40:58.508523   15495 out.go:358] Setting ErrFile to fd 2...
	I0827 21:40:58.508529   15495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:41:08.509628   15495 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0827 21:41:08.513999   15495 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0827 21:41:08.515016   15495 api_server.go:141] control plane version: v1.31.0
	I0827 21:41:08.515042   15495 api_server.go:131] duration metric: took 11.79883667s to wait for apiserver health ...
	I0827 21:41:08.515051   15495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 21:41:08.515076   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0827 21:41:08.515144   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 21:41:08.562679   15495 cri.go:89] found id: "2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:41:08.562701   15495 cri.go:89] found id: ""
	I0827 21:41:08.562709   15495 logs.go:276] 1 containers: [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676]
	I0827 21:41:08.562786   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.568259   15495 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0827 21:41:08.568355   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 21:41:08.616731   15495 cri.go:89] found id: "bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:41:08.616753   15495 cri.go:89] found id: ""
	I0827 21:41:08.616761   15495 logs.go:276] 1 containers: [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198]
	I0827 21:41:08.616810   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.621412   15495 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0827 21:41:08.621473   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 21:41:08.665489   15495 cri.go:89] found id: "ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:41:08.665515   15495 cri.go:89] found id: ""
	I0827 21:41:08.665524   15495 logs.go:276] 1 containers: [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350]
	I0827 21:41:08.665569   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.669669   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0827 21:41:08.669723   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 21:41:08.727141   15495 cri.go:89] found id: "09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:41:08.727165   15495 cri.go:89] found id: ""
	I0827 21:41:08.727175   15495 logs.go:276] 1 containers: [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060]
	I0827 21:41:08.727223   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.731280   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0827 21:41:08.731347   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 21:41:08.766949   15495 cri.go:89] found id: "fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:41:08.766976   15495 cri.go:89] found id: ""
	I0827 21:41:08.766986   15495 logs.go:276] 1 containers: [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de]
	I0827 21:41:08.767032   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.770843   15495 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 21:41:08.770913   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 21:41:08.807417   15495 cri.go:89] found id: "d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:41:08.807438   15495 cri.go:89] found id: ""
	I0827 21:41:08.807447   15495 logs.go:276] 1 containers: [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3]
	I0827 21:41:08.807490   15495 ssh_runner.go:195] Run: which crictl
	I0827 21:41:08.811355   15495 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0827 21:41:08.811412   15495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 21:41:08.847438   15495 cri.go:89] found id: ""
	I0827 21:41:08.847467   15495 logs.go:276] 0 containers: []
	W0827 21:41:08.847478   15495 logs.go:278] No container was found matching "kindnet"
	I0827 21:41:08.847489   15495 logs.go:123] Gathering logs for kube-proxy [fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de] ...
	I0827 21:41:08.847503   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbf58bd95553adce7540adddd54a49e9b77b362a59ca387f3238f5666d74c8de"
	I0827 21:41:08.881944   15495 logs.go:123] Gathering logs for describe nodes ...
	I0827 21:41:08.881972   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 21:41:08.992626   15495 logs.go:123] Gathering logs for kube-scheduler [09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060] ...
	I0827 21:41:08.992654   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09f373bbc5000835f54c7049ee3d5065c4ecd31283fca3522796bba4e00bb060"
	I0827 21:41:09.041326   15495 logs.go:123] Gathering logs for kube-apiserver [2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676] ...
	I0827 21:41:09.041363   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a8a781267dc7b1438ff45edd032b9bcd88102391ff44171769755d4d99a2676"
	I0827 21:41:09.087059   15495 logs.go:123] Gathering logs for etcd [bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198] ...
	I0827 21:41:09.087094   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab031faca02694a39c1200b7981db84b98e7864593fa4c77f839a397f8c0198"
	I0827 21:41:09.151476   15495 logs.go:123] Gathering logs for coredns [ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350] ...
	I0827 21:41:09.151507   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec2ef5f181d5f3732ab481b89df8166d9b5c8d582cb89a2f9d1f6ecd05f85350"
	I0827 21:41:09.187322   15495 logs.go:123] Gathering logs for kube-controller-manager [d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3] ...
	I0827 21:41:09.187351   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d83c2692cce8c4458eafdf5445ab48a23526e7cd8e44568eaab058a1bb4c3aa3"
	I0827 21:41:09.251664   15495 logs.go:123] Gathering logs for CRI-O ...
	I0827 21:41:09.251700   15495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-709833 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.09s)
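The "signal: killed (39m59.959436611s)" above is the test harness's roughly 40-minute budget expiring and reaping the still-running "minikube start", not minikube exiting on its own. A minimal Go sketch of how a context deadline produces that error shape (the command, arguments, and timeout below are illustrative stand-ins, not the harness's actual values):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Illustrative budget; the real harness allows roughly 40 minutes per test.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		// Stand-in for the long-running "minikube start --wait=true ..." invocation.
		cmd := exec.CommandContext(ctx, "sleep", "60")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// When the deadline fires first, the child is sent SIGKILL and the
			// returned error reads "signal: killed", matching the failure above.
			fmt.Printf("command failed: %v\n%s", err, out)
		}
	}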

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 node stop m02 -v=7 --alsologtostderr
E0827 22:26:41.743750   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:27:02.225699   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:27:43.187890   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.470006034s)

                                                
                                                
-- stdout --
	* Stopping node "ha-158602-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:26:38.031405   33400 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:26:38.031570   33400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:26:38.031581   33400 out.go:358] Setting ErrFile to fd 2...
	I0827 22:26:38.031587   33400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:26:38.031751   33400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:26:38.032048   33400 mustload.go:65] Loading cluster: ha-158602
	I0827 22:26:38.032496   33400 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:26:38.032518   33400 stop.go:39] StopHost: ha-158602-m02
	I0827 22:26:38.032914   33400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:26:38.032961   33400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:26:38.049385   33400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40741
	I0827 22:26:38.049838   33400 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:26:38.050365   33400 main.go:141] libmachine: Using API Version  1
	I0827 22:26:38.050395   33400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:26:38.050706   33400 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:26:38.052995   33400 out.go:177] * Stopping node "ha-158602-m02"  ...
	I0827 22:26:38.054362   33400 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 22:26:38.054386   33400 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:26:38.054584   33400 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 22:26:38.054629   33400 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:26:38.057824   33400 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:26:38.058236   33400 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:26:38.058279   33400 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:26:38.058366   33400 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:26:38.058502   33400 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:26:38.058655   33400 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:26:38.058780   33400 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:26:38.139539   33400 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0827 22:26:38.195240   33400 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0827 22:26:38.250691   33400 main.go:141] libmachine: Stopping "ha-158602-m02"...
	I0827 22:26:38.250735   33400 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:26:38.252537   33400 main.go:141] libmachine: (ha-158602-m02) Calling .Stop
	I0827 22:26:38.256731   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 0/120
	I0827 22:26:39.258103   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 1/120
	I0827 22:26:40.259526   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 2/120
	I0827 22:26:41.261065   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 3/120
	I0827 22:26:42.262543   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 4/120
	I0827 22:26:43.264593   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 5/120
	I0827 22:26:44.265778   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 6/120
	I0827 22:26:45.267148   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 7/120
	I0827 22:26:46.268826   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 8/120
	I0827 22:26:47.271163   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 9/120
	I0827 22:26:48.273627   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 10/120
	I0827 22:26:49.274888   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 11/120
	I0827 22:26:50.276355   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 12/120
	I0827 22:26:51.278014   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 13/120
	I0827 22:26:52.279352   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 14/120
	I0827 22:26:53.281282   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 15/120
	I0827 22:26:54.282911   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 16/120
	I0827 22:26:55.284176   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 17/120
	I0827 22:26:56.285507   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 18/120
	I0827 22:26:57.286850   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 19/120
	I0827 22:26:58.288947   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 20/120
	I0827 22:26:59.290433   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 21/120
	I0827 22:27:00.291783   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 22/120
	I0827 22:27:01.293140   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 23/120
	I0827 22:27:02.295294   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 24/120
	I0827 22:27:03.297091   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 25/120
	I0827 22:27:04.299468   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 26/120
	I0827 22:27:05.300683   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 27/120
	I0827 22:27:06.302940   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 28/120
	I0827 22:27:07.305217   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 29/120
	I0827 22:27:08.307625   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 30/120
	I0827 22:27:09.309692   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 31/120
	I0827 22:27:10.311361   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 32/120
	I0827 22:27:11.312700   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 33/120
	I0827 22:27:12.314951   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 34/120
	I0827 22:27:13.316780   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 35/120
	I0827 22:27:14.318959   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 36/120
	I0827 22:27:15.320376   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 37/120
	I0827 22:27:16.321760   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 38/120
	I0827 22:27:17.323236   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 39/120
	I0827 22:27:18.325470   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 40/120
	I0827 22:27:19.327019   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 41/120
	I0827 22:27:20.328632   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 42/120
	I0827 22:27:21.331062   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 43/120
	I0827 22:27:22.333241   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 44/120
	I0827 22:27:23.335348   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 45/120
	I0827 22:27:24.336913   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 46/120
	I0827 22:27:25.339172   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 47/120
	I0827 22:27:26.341627   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 48/120
	I0827 22:27:27.342877   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 49/120
	I0827 22:27:28.344373   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 50/120
	I0827 22:27:29.345844   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 51/120
	I0827 22:27:30.347269   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 52/120
	I0827 22:27:31.348908   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 53/120
	I0827 22:27:32.350370   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 54/120
	I0827 22:27:33.352276   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 55/120
	I0827 22:27:34.354757   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 56/120
	I0827 22:27:35.356118   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 57/120
	I0827 22:27:36.357564   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 58/120
	I0827 22:27:37.359702   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 59/120
	I0827 22:27:38.361828   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 60/120
	I0827 22:27:39.364052   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 61/120
	I0827 22:27:40.366005   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 62/120
	I0827 22:27:41.367257   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 63/120
	I0827 22:27:42.368772   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 64/120
	I0827 22:27:43.370555   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 65/120
	I0827 22:27:44.371894   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 66/120
	I0827 22:27:45.373170   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 67/120
	I0827 22:27:46.375691   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 68/120
	I0827 22:27:47.377047   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 69/120
	I0827 22:27:48.378930   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 70/120
	I0827 22:27:49.380317   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 71/120
	I0827 22:27:50.381533   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 72/120
	I0827 22:27:51.383129   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 73/120
	I0827 22:27:52.384603   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 74/120
	I0827 22:27:53.386476   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 75/120
	I0827 22:27:54.387943   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 76/120
	I0827 22:27:55.389742   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 77/120
	I0827 22:27:56.390923   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 78/120
	I0827 22:27:57.392372   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 79/120
	I0827 22:27:58.394511   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 80/120
	I0827 22:27:59.396078   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 81/120
	I0827 22:28:00.397553   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 82/120
	I0827 22:28:01.399096   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 83/120
	I0827 22:28:02.401063   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 84/120
	I0827 22:28:03.403163   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 85/120
	I0827 22:28:04.404345   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 86/120
	I0827 22:28:05.405640   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 87/120
	I0827 22:28:06.407142   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 88/120
	I0827 22:28:07.408538   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 89/120
	I0827 22:28:08.411158   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 90/120
	I0827 22:28:09.412722   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 91/120
	I0827 22:28:10.414332   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 92/120
	I0827 22:28:11.415829   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 93/120
	I0827 22:28:12.417341   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 94/120
	I0827 22:28:13.419090   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 95/120
	I0827 22:28:14.420591   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 96/120
	I0827 22:28:15.421829   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 97/120
	I0827 22:28:16.422982   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 98/120
	I0827 22:28:17.424536   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 99/120
	I0827 22:28:18.426686   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 100/120
	I0827 22:28:19.428185   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 101/120
	I0827 22:28:20.429853   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 102/120
	I0827 22:28:21.431231   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 103/120
	I0827 22:28:22.432816   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 104/120
	I0827 22:28:23.435264   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 105/120
	I0827 22:28:24.437545   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 106/120
	I0827 22:28:25.439754   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 107/120
	I0827 22:28:26.441363   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 108/120
	I0827 22:28:27.442939   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 109/120
	I0827 22:28:28.444737   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 110/120
	I0827 22:28:29.446114   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 111/120
	I0827 22:28:30.447621   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 112/120
	I0827 22:28:31.449062   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 113/120
	I0827 22:28:32.451022   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 114/120
	I0827 22:28:33.453020   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 115/120
	I0827 22:28:34.454936   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 116/120
	I0827 22:28:35.456255   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 117/120
	I0827 22:28:36.457681   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 118/120
	I0827 22:28:37.459153   33400 main.go:141] libmachine: (ha-158602-m02) Waiting for machine to stop 119/120
	I0827 22:28:38.460362   33400 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0827 22:28:38.460507   33400 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-158602 node stop m02 -v=7 --alsologtostderr": exit status 30
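The "Waiting for machine to stop N/120" lines above trace a bounded poll: after the stop request, the driver's state is checked about once per second for 120 attempts, and the command gives up with exit status 30 once the VM still reports "Running". A rough sketch of that retry shape in Go (getState and requestStop are hypothetical stand-ins, not libmachine's real API; the interval is shortened so the sketch finishes quickly):

	package main

	import (
		"fmt"
		"time"
	)

	// Hypothetical stand-in for querying the hypervisor; always "Running" here
	// to mimic a guest that ignores the shutdown request, as in the log above.
	func getState() string { return "Running" }

	// Hypothetical stand-in for issuing the shutdown/stop request.
	func requestStop() {}

	func stopVM(attempts int, interval time.Duration) error {
		requestStop()
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return fmt.Errorf("unable to stop vm, current state %q", getState())
	}

	func main() {
		// 120 attempts mirrors the log; a millisecond interval keeps the demo short.
		if err := stopVM(120, time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}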
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (19.079972755s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:28:38.504689   33848 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:28:38.504818   33848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:28:38.504829   33848 out.go:358] Setting ErrFile to fd 2...
	I0827 22:28:38.504836   33848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:28:38.505066   33848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:28:38.505269   33848 out.go:352] Setting JSON to false
	I0827 22:28:38.505292   33848 mustload.go:65] Loading cluster: ha-158602
	I0827 22:28:38.505340   33848 notify.go:220] Checking for updates...
	I0827 22:28:38.505663   33848 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:28:38.505681   33848 status.go:255] checking status of ha-158602 ...
	I0827 22:28:38.506138   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.506197   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.521738   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I0827 22:28:38.522235   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.522862   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.522897   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.523190   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.523365   33848 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:28:38.525030   33848 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:28:38.525049   33848 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:28:38.525357   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.525398   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.539755   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0827 22:28:38.540132   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.540538   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.540568   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.540844   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.541026   33848 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:28:38.543814   33848 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:28:38.544274   33848 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:28:38.544296   33848 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:28:38.544424   33848 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:28:38.544745   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.544778   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.559451   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0827 22:28:38.559853   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.560302   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.560325   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.560771   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.560965   33848 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:28:38.561165   33848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:28:38.561207   33848 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:28:38.564340   33848 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:28:38.564801   33848 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:28:38.564827   33848 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:28:38.565030   33848 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:28:38.565205   33848 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:28:38.565407   33848 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:28:38.565560   33848 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:28:38.653804   33848 ssh_runner.go:195] Run: systemctl --version
	I0827 22:28:38.661646   33848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:28:38.678281   33848 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:28:38.678314   33848 api_server.go:166] Checking apiserver status ...
	I0827 22:28:38.678364   33848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:28:38.693448   33848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:28:38.702712   33848 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:28:38.702759   33848 ssh_runner.go:195] Run: ls
	I0827 22:28:38.707120   33848 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:28:38.713385   33848 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:28:38.713406   33848 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:28:38.713422   33848 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:28:38.713443   33848 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:28:38.713762   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.713796   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.728544   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0827 22:28:38.729166   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.729757   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.729779   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.730103   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.730302   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:28:38.732076   33848 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:28:38.732094   33848 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:28:38.732505   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.732549   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.747338   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0827 22:28:38.747829   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.748379   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.748409   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.748988   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.749177   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:28:38.752204   33848 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:28:38.752628   33848 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:28:38.752648   33848 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:28:38.752826   33848 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:28:38.753234   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:38.753275   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:38.768547   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
	I0827 22:28:38.768948   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:38.769480   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:38.769496   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:38.769754   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:38.769932   33848 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:28:38.770096   33848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:28:38.770115   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:28:38.772626   33848 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:28:38.772965   33848 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:28:38.772993   33848 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:28:38.773119   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:28:38.773266   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:28:38.773428   33848 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:28:38.773567   33848 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:28:57.188808   33848 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:28:57.188899   33848 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:28:57.188914   33848 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:28:57.188921   33848 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:28:57.188944   33848 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:28:57.188951   33848 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:28:57.189232   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.189268   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.204267   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I0827 22:28:57.204721   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.205155   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.205176   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.205507   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.205728   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:28:57.207232   33848 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:28:57.207245   33848 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:28:57.207558   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.207598   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.222576   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0827 22:28:57.222955   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.223385   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.223403   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.223750   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.223936   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:28:57.226559   33848 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:28:57.226976   33848 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:28:57.226995   33848 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:28:57.227179   33848 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:28:57.227580   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.227621   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.242131   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0827 22:28:57.242464   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.242897   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.242915   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.243220   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.243405   33848 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:28:57.243558   33848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:28:57.243589   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:28:57.246488   33848 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:28:57.246882   33848 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:28:57.246900   33848 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:28:57.247060   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:28:57.247184   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:28:57.247300   33848 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:28:57.247464   33848 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:28:57.329279   33848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:28:57.347019   33848 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:28:57.347045   33848 api_server.go:166] Checking apiserver status ...
	I0827 22:28:57.347075   33848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:28:57.362522   33848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:28:57.374854   33848 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:28:57.374946   33848 ssh_runner.go:195] Run: ls
	I0827 22:28:57.379890   33848 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:28:57.384141   33848 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:28:57.384162   33848 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:28:57.384169   33848 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:28:57.384186   33848 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:28:57.384460   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.384535   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.399681   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I0827 22:28:57.400099   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.400613   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.400631   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.400900   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.401080   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:28:57.402534   33848 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:28:57.402550   33848 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:28:57.402834   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.402873   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.417768   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0827 22:28:57.418214   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.418679   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.418701   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.419017   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.419187   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:28:57.421899   33848 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:28:57.422351   33848 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:28:57.422378   33848 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:28:57.422548   33848 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:28:57.422836   33848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:28:57.422870   33848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:28:57.437592   33848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0827 22:28:57.438088   33848 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:28:57.438684   33848 main.go:141] libmachine: Using API Version  1
	I0827 22:28:57.438709   33848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:28:57.439014   33848 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:28:57.439194   33848 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:28:57.439382   33848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:28:57.439401   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:28:57.442283   33848 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:28:57.442767   33848 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:28:57.442786   33848 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:28:57.442949   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:28:57.443131   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:28:57.443266   33848 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:28:57.443426   33848 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:28:57.524739   33848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:28:57.539992   33848 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr" : exit status 3
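The m02 "host: Error" status above follows from the SSH dial failing with "connect: no route to host" while the guest is mid-shutdown. The same reachability probe can be expressed with Go's standard library as below (the address is the one from this run and is purely illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from this run's logs; replace as needed.
		addr := "192.168.39.142:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A stopped or half-stopped VM typically surfaces here as
			// "connect: no route to host" or an i/o timeout.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}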
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-158602 -n ha-158602
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-158602 logs -n 25: (1.325444296s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m03_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m04 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp testdata/cp-test.txt                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m04_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03:/home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m03 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-158602 node stop m02 -v=7                                                     | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
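	The audit rows above come from TestMultiControlPlane/serial/CopyFile: each cp into or between nodes is immediately followed by an ssh -n <node> sudo cat of the target path to verify the copy, and the final node stop m02 row has no End Time recorded. A minimal sketch of the copy-and-verify pattern using node and path names from the table (placing -p to select the profile is an assumption about how the binary was invoked):
	
	  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602-m04:/home/docker/cp-test.txt
	  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"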
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:22:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:22:05.725091   29384 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:22:05.725198   29384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:22:05.725207   29384 out.go:358] Setting ErrFile to fd 2...
	I0827 22:22:05.725211   29384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:22:05.725395   29384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:22:05.725951   29384 out.go:352] Setting JSON to false
	I0827 22:22:05.726785   29384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3873,"bootTime":1724793453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:22:05.726843   29384 start.go:139] virtualization: kvm guest
	I0827 22:22:05.728938   29384 out.go:177] * [ha-158602] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:22:05.730144   29384 notify.go:220] Checking for updates...
	I0827 22:22:05.730158   29384 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:22:05.731229   29384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:22:05.732370   29384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:05.733494   29384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:05.734563   29384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:22:05.735662   29384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:22:05.736957   29384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:22:05.770377   29384 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 22:22:05.771555   29384 start.go:297] selected driver: kvm2
	I0827 22:22:05.771570   29384 start.go:901] validating driver "kvm2" against <nil>
	I0827 22:22:05.771585   29384 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:22:05.772234   29384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:22:05.772301   29384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:22:05.786773   29384 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:22:05.786811   29384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 22:22:05.787000   29384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:22:05.787063   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:05.787074   29384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0827 22:22:05.787080   29384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 22:22:05.787126   29384 start.go:340] cluster config:
	{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:22:05.787229   29384 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:22:05.788962   29384 out.go:177] * Starting "ha-158602" primary control-plane node in "ha-158602" cluster
	I0827 22:22:05.790185   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:05.790216   29384 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:22:05.790227   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:22:05.790298   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:22:05.790308   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:22:05.790581   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:05.790598   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json: {Name:mkfa8fe80ca5d9f0499f17034da7769023bc4dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
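	The cluster config dumped at start.go:340 above is what gets persisted to the config.json path in the preceding two lines; when reproducing this run, the saved copy can be inspected directly (a sketch, assuming the default profile layout shown in this log):
	
	  cat /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json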
	I0827 22:22:05.790717   29384 start.go:360] acquireMachinesLock for ha-158602: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:22:05.790744   29384 start.go:364] duration metric: took 14.385µs to acquireMachinesLock for "ha-158602"
	I0827 22:22:05.790759   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:05.790813   29384 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 22:22:05.792317   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:22:05.792451   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:05.792505   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:05.806240   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0827 22:22:05.806635   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:05.807149   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:05.807199   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:05.807494   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:05.807666   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:05.807803   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:05.807933   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:22:05.807959   29384 client.go:168] LocalClient.Create starting
	I0827 22:22:05.807993   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:22:05.808031   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:05.808049   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:05.808110   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:22:05.808137   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:05.808154   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:05.808177   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:22:05.808195   29384 main.go:141] libmachine: (ha-158602) Calling .PreCreateCheck
	I0827 22:22:05.808508   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:05.808908   29384 main.go:141] libmachine: Creating machine...
	I0827 22:22:05.808923   29384 main.go:141] libmachine: (ha-158602) Calling .Create
	I0827 22:22:05.809055   29384 main.go:141] libmachine: (ha-158602) Creating KVM machine...
	I0827 22:22:05.810075   29384 main.go:141] libmachine: (ha-158602) DBG | found existing default KVM network
	I0827 22:22:05.810681   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:05.810546   29407 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0827 22:22:05.810700   29384 main.go:141] libmachine: (ha-158602) DBG | created network xml: 
	I0827 22:22:05.810711   29384 main.go:141] libmachine: (ha-158602) DBG | <network>
	I0827 22:22:05.810726   29384 main.go:141] libmachine: (ha-158602) DBG |   <name>mk-ha-158602</name>
	I0827 22:22:05.810737   29384 main.go:141] libmachine: (ha-158602) DBG |   <dns enable='no'/>
	I0827 22:22:05.810749   29384 main.go:141] libmachine: (ha-158602) DBG |   
	I0827 22:22:05.810764   29384 main.go:141] libmachine: (ha-158602) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0827 22:22:05.810773   29384 main.go:141] libmachine: (ha-158602) DBG |     <dhcp>
	I0827 22:22:05.810788   29384 main.go:141] libmachine: (ha-158602) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0827 22:22:05.810795   29384 main.go:141] libmachine: (ha-158602) DBG |     </dhcp>
	I0827 22:22:05.810802   29384 main.go:141] libmachine: (ha-158602) DBG |   </ip>
	I0827 22:22:05.810810   29384 main.go:141] libmachine: (ha-158602) DBG |   
	I0827 22:22:05.810818   29384 main.go:141] libmachine: (ha-158602) DBG | </network>
	I0827 22:22:05.810831   29384 main.go:141] libmachine: (ha-158602) DBG | 
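	For readability, the private network definition spread across the DBG lines above reassembles to the following libvirt XML (taken verbatim from the log, not regenerated):
	
	  <network>
	    <name>mk-ha-158602</name>
	    <dns enable='no'/>
	    <ip address='192.168.39.1' netmask='255.255.255.0'>
	      <dhcp>
	        <range start='192.168.39.2' end='192.168.39.253'/>
	      </dhcp>
	    </ip>
	  </network>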
	I0827 22:22:05.815706   29384 main.go:141] libmachine: (ha-158602) DBG | trying to create private KVM network mk-ha-158602 192.168.39.0/24...
	I0827 22:22:05.877509   29384 main.go:141] libmachine: (ha-158602) DBG | private KVM network mk-ha-158602 192.168.39.0/24 created
	I0827 22:22:05.877546   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:05.877474   29407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:05.877558   29384 main.go:141] libmachine: (ha-158602) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 ...
	I0827 22:22:05.877582   29384 main.go:141] libmachine: (ha-158602) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:22:05.877629   29384 main.go:141] libmachine: (ha-158602) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:22:06.119558   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.119445   29407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa...
	I0827 22:22:06.271755   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.271633   29407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/ha-158602.rawdisk...
	I0827 22:22:06.271777   29384 main.go:141] libmachine: (ha-158602) DBG | Writing magic tar header
	I0827 22:22:06.271787   29384 main.go:141] libmachine: (ha-158602) DBG | Writing SSH key tar header
	I0827 22:22:06.271795   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.271742   29407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 ...
	I0827 22:22:06.271865   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 (perms=drwx------)
	I0827 22:22:06.271876   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:22:06.271902   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:22:06.271922   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:22:06.271932   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602
	I0827 22:22:06.271940   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:22:06.271949   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:06.271956   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:22:06.271971   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:22:06.271982   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:22:06.271990   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:22:06.271997   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:22:06.272005   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home
	I0827 22:22:06.272012   29384 main.go:141] libmachine: (ha-158602) DBG | Skipping /home - not owner
	I0827 22:22:06.272020   29384 main.go:141] libmachine: (ha-158602) Creating domain...
	I0827 22:22:06.273037   29384 main.go:141] libmachine: (ha-158602) define libvirt domain using xml: 
	I0827 22:22:06.273062   29384 main.go:141] libmachine: (ha-158602) <domain type='kvm'>
	I0827 22:22:06.273073   29384 main.go:141] libmachine: (ha-158602)   <name>ha-158602</name>
	I0827 22:22:06.273085   29384 main.go:141] libmachine: (ha-158602)   <memory unit='MiB'>2200</memory>
	I0827 22:22:06.273105   29384 main.go:141] libmachine: (ha-158602)   <vcpu>2</vcpu>
	I0827 22:22:06.273123   29384 main.go:141] libmachine: (ha-158602)   <features>
	I0827 22:22:06.273130   29384 main.go:141] libmachine: (ha-158602)     <acpi/>
	I0827 22:22:06.273137   29384 main.go:141] libmachine: (ha-158602)     <apic/>
	I0827 22:22:06.273145   29384 main.go:141] libmachine: (ha-158602)     <pae/>
	I0827 22:22:06.273158   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273168   29384 main.go:141] libmachine: (ha-158602)   </features>
	I0827 22:22:06.273176   29384 main.go:141] libmachine: (ha-158602)   <cpu mode='host-passthrough'>
	I0827 22:22:06.273189   29384 main.go:141] libmachine: (ha-158602)   
	I0827 22:22:06.273196   29384 main.go:141] libmachine: (ha-158602)   </cpu>
	I0827 22:22:06.273250   29384 main.go:141] libmachine: (ha-158602)   <os>
	I0827 22:22:06.273273   29384 main.go:141] libmachine: (ha-158602)     <type>hvm</type>
	I0827 22:22:06.273283   29384 main.go:141] libmachine: (ha-158602)     <boot dev='cdrom'/>
	I0827 22:22:06.273295   29384 main.go:141] libmachine: (ha-158602)     <boot dev='hd'/>
	I0827 22:22:06.273306   29384 main.go:141] libmachine: (ha-158602)     <bootmenu enable='no'/>
	I0827 22:22:06.273315   29384 main.go:141] libmachine: (ha-158602)   </os>
	I0827 22:22:06.273323   29384 main.go:141] libmachine: (ha-158602)   <devices>
	I0827 22:22:06.273331   29384 main.go:141] libmachine: (ha-158602)     <disk type='file' device='cdrom'>
	I0827 22:22:06.273341   29384 main.go:141] libmachine: (ha-158602)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/boot2docker.iso'/>
	I0827 22:22:06.273356   29384 main.go:141] libmachine: (ha-158602)       <target dev='hdc' bus='scsi'/>
	I0827 22:22:06.273389   29384 main.go:141] libmachine: (ha-158602)       <readonly/>
	I0827 22:22:06.273408   29384 main.go:141] libmachine: (ha-158602)     </disk>
	I0827 22:22:06.273422   29384 main.go:141] libmachine: (ha-158602)     <disk type='file' device='disk'>
	I0827 22:22:06.273435   29384 main.go:141] libmachine: (ha-158602)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:22:06.273452   29384 main.go:141] libmachine: (ha-158602)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/ha-158602.rawdisk'/>
	I0827 22:22:06.273464   29384 main.go:141] libmachine: (ha-158602)       <target dev='hda' bus='virtio'/>
	I0827 22:22:06.273474   29384 main.go:141] libmachine: (ha-158602)     </disk>
	I0827 22:22:06.273485   29384 main.go:141] libmachine: (ha-158602)     <interface type='network'>
	I0827 22:22:06.273497   29384 main.go:141] libmachine: (ha-158602)       <source network='mk-ha-158602'/>
	I0827 22:22:06.273510   29384 main.go:141] libmachine: (ha-158602)       <model type='virtio'/>
	I0827 22:22:06.273521   29384 main.go:141] libmachine: (ha-158602)     </interface>
	I0827 22:22:06.273533   29384 main.go:141] libmachine: (ha-158602)     <interface type='network'>
	I0827 22:22:06.273542   29384 main.go:141] libmachine: (ha-158602)       <source network='default'/>
	I0827 22:22:06.273554   29384 main.go:141] libmachine: (ha-158602)       <model type='virtio'/>
	I0827 22:22:06.273576   29384 main.go:141] libmachine: (ha-158602)     </interface>
	I0827 22:22:06.273592   29384 main.go:141] libmachine: (ha-158602)     <serial type='pty'>
	I0827 22:22:06.273602   29384 main.go:141] libmachine: (ha-158602)       <target port='0'/>
	I0827 22:22:06.273608   29384 main.go:141] libmachine: (ha-158602)     </serial>
	I0827 22:22:06.273616   29384 main.go:141] libmachine: (ha-158602)     <console type='pty'>
	I0827 22:22:06.273629   29384 main.go:141] libmachine: (ha-158602)       <target type='serial' port='0'/>
	I0827 22:22:06.273640   29384 main.go:141] libmachine: (ha-158602)     </console>
	I0827 22:22:06.273653   29384 main.go:141] libmachine: (ha-158602)     <rng model='virtio'>
	I0827 22:22:06.273665   29384 main.go:141] libmachine: (ha-158602)       <backend model='random'>/dev/random</backend>
	I0827 22:22:06.273682   29384 main.go:141] libmachine: (ha-158602)     </rng>
	I0827 22:22:06.273711   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273723   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273730   29384 main.go:141] libmachine: (ha-158602)   </devices>
	I0827 22:22:06.273740   29384 main.go:141] libmachine: (ha-158602) </domain>
	I0827 22:22:06.273748   29384 main.go:141] libmachine: (ha-158602) 
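	Likewise, the domain definition logged line by line above reassembles to this libvirt XML; the empty DBG lines in the log correspond to optional template sections left blank in this run:
	
	  <domain type='kvm'>
	    <name>ha-158602</name>
	    <memory unit='MiB'>2200</memory>
	    <vcpu>2</vcpu>
	    <features>
	      <acpi/>
	      <apic/>
	      <pae/>
	    </features>
	    <cpu mode='host-passthrough'>
	    </cpu>
	    <os>
	      <type>hvm</type>
	      <boot dev='cdrom'/>
	      <boot dev='hd'/>
	      <bootmenu enable='no'/>
	    </os>
	    <devices>
	      <disk type='file' device='cdrom'>
	        <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/boot2docker.iso'/>
	        <target dev='hdc' bus='scsi'/>
	        <readonly/>
	      </disk>
	      <disk type='file' device='disk'>
	        <driver name='qemu' type='raw' cache='default' io='threads' />
	        <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/ha-158602.rawdisk'/>
	        <target dev='hda' bus='virtio'/>
	      </disk>
	      <interface type='network'>
	        <source network='mk-ha-158602'/>
	        <model type='virtio'/>
	      </interface>
	      <interface type='network'>
	        <source network='default'/>
	        <model type='virtio'/>
	      </interface>
	      <serial type='pty'>
	        <target port='0'/>
	      </serial>
	      <console type='pty'>
	        <target type='serial' port='0'/>
	      </console>
	      <rng model='virtio'>
	        <backend model='random'>/dev/random</backend>
	      </rng>
	    </devices>
	  </domain>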
	I0827 22:22:06.277981   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:88:a1:82 in network default
	I0827 22:22:06.278502   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:06.278514   29384 main.go:141] libmachine: (ha-158602) Ensuring networks are active...
	I0827 22:22:06.279216   29384 main.go:141] libmachine: (ha-158602) Ensuring network default is active
	I0827 22:22:06.279594   29384 main.go:141] libmachine: (ha-158602) Ensuring network mk-ha-158602 is active
	I0827 22:22:06.280161   29384 main.go:141] libmachine: (ha-158602) Getting domain xml...
	I0827 22:22:06.280932   29384 main.go:141] libmachine: (ha-158602) Creating domain...
	I0827 22:22:07.467089   29384 main.go:141] libmachine: (ha-158602) Waiting to get IP...
	I0827 22:22:07.467844   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.468192   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.468236   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.468189   29407 retry.go:31] will retry after 194.265732ms: waiting for machine to come up
	I0827 22:22:07.663504   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.663919   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.663939   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.663867   29407 retry.go:31] will retry after 270.765071ms: waiting for machine to come up
	I0827 22:22:07.937608   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.938086   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.938109   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.938043   29407 retry.go:31] will retry after 339.340195ms: waiting for machine to come up
	I0827 22:22:08.278496   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:08.278863   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:08.278880   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:08.278827   29407 retry.go:31] will retry after 514.863902ms: waiting for machine to come up
	I0827 22:22:08.795484   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:08.795916   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:08.795944   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:08.795873   29407 retry.go:31] will retry after 630.596256ms: waiting for machine to come up
	I0827 22:22:09.427625   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:09.428002   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:09.428027   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:09.427950   29407 retry.go:31] will retry after 906.309617ms: waiting for machine to come up
	I0827 22:22:10.336015   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:10.336420   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:10.336513   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:10.336396   29407 retry.go:31] will retry after 810.130306ms: waiting for machine to come up
	I0827 22:22:11.147751   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:11.148358   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:11.148404   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:11.148325   29407 retry.go:31] will retry after 1.037475417s: waiting for machine to come up
	I0827 22:22:12.187573   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:12.188125   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:12.188164   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:12.187954   29407 retry.go:31] will retry after 1.741861845s: waiting for machine to come up
	I0827 22:22:13.931937   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:13.932385   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:13.932415   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:13.932334   29407 retry.go:31] will retry after 2.17941581s: waiting for machine to come up
	I0827 22:22:16.113939   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:16.114420   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:16.114449   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:16.114352   29407 retry.go:31] will retry after 2.318053422s: waiting for machine to come up
	I0827 22:22:18.435855   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:18.436172   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:18.436193   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:18.436133   29407 retry.go:31] will retry after 2.715139833s: waiting for machine to come up
	I0827 22:22:21.152530   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:21.152930   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:21.152959   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:21.152883   29407 retry.go:31] will retry after 3.047166733s: waiting for machine to come up
	I0827 22:22:24.203998   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:24.204352   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:24.204375   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:24.204336   29407 retry.go:31] will retry after 4.148204433s: waiting for machine to come up
	I0827 22:22:28.355563   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.355978   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has current primary IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.356003   29384 main.go:141] libmachine: (ha-158602) Found IP for machine: 192.168.39.77
	I0827 22:22:28.356016   29384 main.go:141] libmachine: (ha-158602) Reserving static IP address...
	I0827 22:22:28.356292   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find host DHCP lease matching {name: "ha-158602", mac: "52:54:00:25:de:6a", ip: "192.168.39.77"} in network mk-ha-158602
	I0827 22:22:28.428664   29384 main.go:141] libmachine: (ha-158602) Reserved static IP address: 192.168.39.77
	I0827 22:22:28.428689   29384 main.go:141] libmachine: (ha-158602) Waiting for SSH to be available...
	I0827 22:22:28.428699   29384 main.go:141] libmachine: (ha-158602) DBG | Getting to WaitForSSH function...
	I0827 22:22:28.431057   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.431485   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.431516   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.431603   29384 main.go:141] libmachine: (ha-158602) DBG | Using SSH client type: external
	I0827 22:22:28.431638   29384 main.go:141] libmachine: (ha-158602) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa (-rw-------)
	I0827 22:22:28.431679   29384 main.go:141] libmachine: (ha-158602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:22:28.431696   29384 main.go:141] libmachine: (ha-158602) DBG | About to run SSH command:
	I0827 22:22:28.431712   29384 main.go:141] libmachine: (ha-158602) DBG | exit 0
	I0827 22:22:28.560450   29384 main.go:141] libmachine: (ha-158602) DBG | SSH cmd err, output: <nil>: 
	I0827 22:22:28.560805   29384 main.go:141] libmachine: (ha-158602) KVM machine creation complete!
	I0827 22:22:28.561127   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:28.561629   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:28.561874   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:28.562017   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:22:28.562034   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:28.563480   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:22:28.563494   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:22:28.563500   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:22:28.563506   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.565826   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.566247   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.566267   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.566440   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.566680   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.566852   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.567031   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.567196   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.567381   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.567394   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:22:28.675712   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:22:28.675738   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:22:28.675749   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.678641   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.679008   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.679039   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.679207   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.679414   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.679587   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.679800   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.679980   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.680216   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.680232   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:22:28.792985   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:22:28.793063   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:22:28.793076   29384 main.go:141] libmachine: Provisioning with buildroot...
	I0827 22:22:28.793084   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:28.793353   29384 buildroot.go:166] provisioning hostname "ha-158602"
	I0827 22:22:28.793377   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:28.793549   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.796260   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.796636   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.796663   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.796788   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.796977   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.797137   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.797243   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.797430   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.797634   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.797653   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602 && echo "ha-158602" | sudo tee /etc/hostname
	I0827 22:22:28.923109   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:22:28.923134   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.926153   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.926503   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.926530   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.926699   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.926955   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.927131   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.927366   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.927515   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.927700   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.927716   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:22:29.048227   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:22:29.048253   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:22:29.048285   29384 buildroot.go:174] setting up certificates
	I0827 22:22:29.048294   29384 provision.go:84] configureAuth start
	I0827 22:22:29.048302   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:29.048596   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.051241   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.051578   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.051603   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.051768   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.054036   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.054563   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.054599   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.054709   29384 provision.go:143] copyHostCerts
	I0827 22:22:29.054733   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:22:29.054764   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:22:29.054780   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:22:29.054850   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:22:29.054937   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:22:29.054960   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:22:29.054970   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:22:29.054995   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:22:29.055073   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:22:29.055105   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:22:29.055115   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:22:29.055152   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:22:29.055222   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602 san=[127.0.0.1 192.168.39.77 ha-158602 localhost minikube]
	I0827 22:22:29.154469   29384 provision.go:177] copyRemoteCerts
	I0827 22:22:29.154522   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:22:29.154543   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.157356   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.157674   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.157696   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.157930   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.158112   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.158233   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.158366   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.242578   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:22:29.242638   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:22:29.265734   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:22:29.265816   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0827 22:22:29.288756   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:22:29.288828   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 22:22:29.311517   29384 provision.go:87] duration metric: took 263.210733ms to configureAuth
	I0827 22:22:29.311550   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:22:29.311770   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:29.311849   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.314644   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.314971   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.314997   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.315171   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.315372   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.315507   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.315617   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.315781   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:29.315958   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:29.315979   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:22:29.546170   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:22:29.546198   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:22:29.546209   29384 main.go:141] libmachine: (ha-158602) Calling .GetURL
	I0827 22:22:29.547671   29384 main.go:141] libmachine: (ha-158602) DBG | Using libvirt version 6000000
	I0827 22:22:29.549750   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.550027   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.550056   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.550171   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:22:29.550182   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:22:29.550197   29384 client.go:171] duration metric: took 23.742230676s to LocalClient.Create
	I0827 22:22:29.550222   29384 start.go:167] duration metric: took 23.742288109s to libmachine.API.Create "ha-158602"
	I0827 22:22:29.550231   29384 start.go:293] postStartSetup for "ha-158602" (driver="kvm2")
	I0827 22:22:29.550244   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:22:29.550264   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.550577   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:22:29.550600   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.552753   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.553090   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.553118   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.553195   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.553448   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.553620   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.553773   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.638774   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:22:29.642778   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:22:29.642806   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:22:29.642897   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:22:29.643004   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:22:29.643017   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:22:29.643159   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:22:29.652334   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:22:29.675090   29384 start.go:296] duration metric: took 124.845065ms for postStartSetup
	I0827 22:22:29.675136   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:29.675736   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.678241   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.678633   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.678660   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.678878   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:29.679066   29384 start.go:128] duration metric: took 23.888243916s to createHost
	I0827 22:22:29.679089   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.681377   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.681691   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.681716   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.681802   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.681977   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.682107   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.682257   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.682399   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:29.682549   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:29.682569   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:22:29.792862   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797349.771722306
	
	I0827 22:22:29.792895   29384 fix.go:216] guest clock: 1724797349.771722306
	I0827 22:22:29.792908   29384 fix.go:229] Guest: 2024-08-27 22:22:29.771722306 +0000 UTC Remote: 2024-08-27 22:22:29.679078204 +0000 UTC m=+23.987252558 (delta=92.644102ms)
	I0827 22:22:29.792938   29384 fix.go:200] guest clock delta is within tolerance: 92.644102ms
	I0827 22:22:29.792947   29384 start.go:83] releasing machines lock for "ha-158602", held for 24.00219403s
	I0827 22:22:29.792977   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.793232   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.795836   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.796182   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.796208   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.796387   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.796865   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.797060   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.797174   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:22:29.797220   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.797265   29384 ssh_runner.go:195] Run: cat /version.json
	I0827 22:22:29.797281   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.799931   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.799949   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800228   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.800285   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.800307   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800330   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800475   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.800620   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.800694   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.800786   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.800855   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.800912   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.800966   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.801020   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.913602   29384 ssh_runner.go:195] Run: systemctl --version
	I0827 22:22:29.919590   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:22:30.074495   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:22:30.079886   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:22:30.079939   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:22:30.094396   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:22:30.094422   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:22:30.094496   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:22:30.109029   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:22:30.122546   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:22:30.122642   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:22:30.136969   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:22:30.150178   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:22:30.259147   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:22:30.423027   29384 docker.go:233] disabling docker service ...
	I0827 22:22:30.423085   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:22:30.436430   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:22:30.448753   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:22:30.577789   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:22:30.700754   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:22:30.713801   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:22:30.732850   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:22:30.732912   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.744177   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:22:30.744243   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.755285   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.766141   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.777285   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:22:30.788436   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.799321   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.816622   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
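Taken together, the tee and sed commands above point crictl at the CRI-O socket and rewrite the 02-crio.conf drop-in for the cgroupfs driver, the pause image and unprivileged ports. A sketch of how one could verify the result inside the VM; the expected lines are reconstructed from the commands, not copied from the actual file:

	# crictl endpoint written by the tee a few lines up
	cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
	# key settings the sed edits should leave in the CRI-O drop-in
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",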
	I0827 22:22:30.827742   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:22:30.837836   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:22:30.837887   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:22:30.851354   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
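The sysctl probe above fails only because br_netfilter is not loaded yet; the modprobe creates /proc/sys/net/bridge/bridge-nf-call-iptables and the echo enables IPv4 forwarding. A quick follow-up check (not something the test itself runs) would be:

	# should exist once br_netfilter is loaded
	sysctl net.bridge.bridge-nf-call-iptables
	# explicitly enabled by the echo above
	sysctl net.ipv4.ip_forward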
	I0827 22:22:30.861051   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:22:30.984839   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:22:31.074940   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:22:31.075021   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:22:31.079425   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:22:31.079483   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:22:31.083100   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:22:31.120413   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:22:31.120509   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:22:31.148060   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:22:31.175246   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:22:31.176367   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:31.178721   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:31.179040   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:31.179066   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:31.179249   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:22:31.182922   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
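The bash one-liner above rewrites /etc/hosts in place, so after it runs the guest resolves host.minikube.internal to the host's address on the mk-ha-158602 network:

	grep host.minikube.internal /etc/hosts
	# 192.168.39.1	host.minikube.internal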
	I0827 22:22:31.195106   29384 kubeadm.go:883] updating cluster {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:22:31.195214   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:31.195263   29384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:22:31.226670   29384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0827 22:22:31.226758   29384 ssh_runner.go:195] Run: which lz4
	I0827 22:22:31.230524   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0827 22:22:31.230640   29384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 22:22:31.234392   29384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 22:22:31.234422   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0827 22:22:32.368661   29384 crio.go:462] duration metric: took 1.13806452s to copy over tarball
	I0827 22:22:32.368736   29384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 22:22:34.354238   29384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.985475866s)
	I0827 22:22:34.354264   29384 crio.go:469] duration metric: took 1.985575846s to extract the tarball
	I0827 22:22:34.354270   29384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0827 22:22:34.390079   29384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:22:34.433362   29384 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:22:34.433387   29384 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:22:34.433397   29384 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.0 crio true true} ...
	I0827 22:22:34.433533   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
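The kubelet unit drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; once it is in place, the effective unit can be inspected from the VM, for example:

	# shows kubelet.service together with the 10-kubeadm.conf drop-in
	systemctl cat kubelet
	# the node-specific flags from the ExecStart above
	grep -R -- '--node-ip' /etc/systemd/system/kubelet.service.d/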
	I0827 22:22:34.433623   29384 ssh_runner.go:195] Run: crio config
	I0827 22:22:34.477896   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:34.477915   29384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 22:22:34.477924   29384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:22:34.477943   29384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-158602 NodeName:ha-158602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:22:34.478089   29384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-158602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
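The rendered kubeadm config above combines four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3) plus KubeletConfiguration and KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml later in this log; assuming the bundled kubeadm (v1.31.0, which should ship a config validate subcommand), it could be sanity-checked by hand with:

	# illustrative only; the test goes straight to kubeadm init
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml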
	
	I0827 22:22:34.478113   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:22:34.478157   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:22:34.493162   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:22:34.493306   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
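From the manifest above (vip_interface eth0, address 192.168.39.254, lb_port 8443), once kube-vip is running as a static pod the HA virtual IP should appear as a secondary address on eth0 and the API server should answer on it. A rough check from the node, assuming the default RBAC that exposes /version to unauthenticated clients:

	# VIP from the kube-vip manifest
	ip addr show eth0 | grep 192.168.39.254
	# control-plane endpoint behind the VIP
	curl -k https://192.168.39.254:8443/version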
	I0827 22:22:34.493383   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:22:34.503023   29384 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:22:34.503082   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0827 22:22:34.512199   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0827 22:22:34.527196   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:22:34.541542   29384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0827 22:22:34.556176   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0827 22:22:34.573020   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:22:34.576515   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:22:34.587435   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:22:34.701038   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:22:34.716711   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.77
	I0827 22:22:34.716737   29384 certs.go:194] generating shared ca certs ...
	I0827 22:22:34.716757   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.716937   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:22:34.716984   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:22:34.716997   29384 certs.go:256] generating profile certs ...
	I0827 22:22:34.717046   29384 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:22:34.717072   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt with IP's: []
	I0827 22:22:34.818879   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt ...
	I0827 22:22:34.818905   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt: {Name:mkdf45df5f65fbc406507ea6a9494233f6ccc139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.819088   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key ...
	I0827 22:22:34.819101   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key: {Name:mka5ce0f67af3ce4732ca247b43e3fa8d39f7d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.819193   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a
	I0827 22:22:34.819217   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.254]
	I0827 22:22:34.864751   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a ...
	I0827 22:22:34.864777   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a: {Name:mkf15c8892d9da701cae3227207b1e68ca1f0830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.864921   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a ...
	I0827 22:22:34.864933   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a: {Name:mkaadc67dd86d52629334b484281a2a6fe7c5760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.865003   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:22:34.865081   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:22:34.865134   29384 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:22:34.865149   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt with IP's: []
	I0827 22:22:34.922123   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt ...
	I0827 22:22:34.922151   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt: {Name:mk7b73460f10a4c6e6831b9d583235ac67597a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.922296   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key ...
	I0827 22:22:34.922306   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key: {Name:mk0221e48cfc3cc05f388732951062f16a100d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.922377   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:22:34.922393   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:22:34.922403   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:22:34.922416   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:22:34.922426   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:22:34.922440   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:22:34.922453   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:22:34.922466   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:22:34.922508   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:22:34.922539   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:22:34.922547   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:22:34.922569   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:22:34.922600   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:22:34.922622   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:22:34.922658   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:22:34.922679   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:22:34.922689   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:22:34.922699   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:34.923196   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:22:34.947282   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:22:34.969654   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:22:34.991949   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:22:35.014109   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0827 22:22:35.036792   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:22:35.058733   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:22:35.080663   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:22:35.102279   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:22:35.124796   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:22:35.145735   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:22:35.166753   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:22:35.182273   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:22:35.187454   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:22:35.196900   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.200799   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.200842   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.206018   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:22:35.215519   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:22:35.225022   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.229043   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.229097   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.234398   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 22:22:35.243911   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:22:35.253612   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.257500   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.257543   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.262614   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
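The pattern in the commands above is: hash each CA with openssl, then symlink it under /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it; for the minikube CA that hash is b5213941, matching the b5213941.0 link created above. Reproducing it by hand:

	# prints the subject hash used to name the /etc/ssl/certs symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
	ls -l /etc/ssl/certs/b5213941.0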
	I0827 22:22:35.272027   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:22:35.275533   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:22:35.275603   29384 kubeadm.go:392] StartCluster: {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:22:35.275674   29384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:22:35.275735   29384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:22:35.309881   29384 cri.go:89] found id: ""
	I0827 22:22:35.309954   29384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 22:22:35.318972   29384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 22:22:35.327616   29384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 22:22:35.336267   29384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 22:22:35.336288   29384 kubeadm.go:157] found existing configuration files:
	
	I0827 22:22:35.336328   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 22:22:35.344725   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 22:22:35.344785   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 22:22:35.353189   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 22:22:35.361747   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 22:22:35.361797   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 22:22:35.370528   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 22:22:35.379013   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 22:22:35.379059   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 22:22:35.388154   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 22:22:35.396693   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 22:22:35.396747   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 22:22:35.405660   29384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 22:22:35.510317   29384 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0827 22:22:35.510446   29384 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 22:22:35.625850   29384 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 22:22:35.626003   29384 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 22:22:35.626109   29384 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 22:22:35.636040   29384 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 22:22:35.638828   29384 out.go:235]   - Generating certificates and keys ...
	I0827 22:22:35.638931   29384 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 22:22:35.639011   29384 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 22:22:35.765494   29384 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 22:22:35.847870   29384 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 22:22:35.951048   29384 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 22:22:36.106009   29384 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 22:22:36.255065   29384 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 22:22:36.255236   29384 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-158602 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0827 22:22:36.328842   29384 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 22:22:36.329019   29384 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-158602 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0827 22:22:36.391948   29384 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 22:22:36.486461   29384 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 22:22:36.622616   29384 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 22:22:36.622853   29384 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 22:22:37.182141   29384 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 22:22:37.329148   29384 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0827 22:22:37.487447   29384 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 22:22:37.611584   29384 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 22:22:37.725021   29384 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 22:22:37.725712   29384 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 22:22:37.728853   29384 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 22:22:37.838718   29384 out.go:235]   - Booting up control plane ...
	I0827 22:22:37.838841   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 22:22:37.838942   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 22:22:37.839019   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 22:22:37.839141   29384 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 22:22:37.839260   29384 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 22:22:37.839324   29384 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 22:22:37.889251   29384 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0827 22:22:37.889444   29384 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0827 22:22:38.390792   29384 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.677224ms
	I0827 22:22:38.390907   29384 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0827 22:22:44.335789   29384 kubeadm.go:310] [api-check] The API server is healthy after 5.948175854s
	I0827 22:22:44.351540   29384 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 22:22:44.369518   29384 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 22:22:44.904393   29384 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 22:22:44.904686   29384 kubeadm.go:310] [mark-control-plane] Marking the node ha-158602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 22:22:44.917380   29384 kubeadm.go:310] [bootstrap-token] Using token: 1ncx0g.2a6qvzpriwfvodsr
	I0827 22:22:44.918757   29384 out.go:235]   - Configuring RBAC rules ...
	I0827 22:22:44.918916   29384 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 22:22:44.928255   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 22:22:44.939201   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 22:22:44.942667   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 22:22:44.946407   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 22:22:44.950174   29384 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 22:22:44.965897   29384 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 22:22:45.211092   29384 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 22:22:45.742594   29384 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 22:22:45.744221   29384 kubeadm.go:310] 
	I0827 22:22:45.744283   29384 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 22:22:45.744291   29384 kubeadm.go:310] 
	I0827 22:22:45.744415   29384 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 22:22:45.744435   29384 kubeadm.go:310] 
	I0827 22:22:45.744478   29384 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 22:22:45.744555   29384 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 22:22:45.744621   29384 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 22:22:45.744634   29384 kubeadm.go:310] 
	I0827 22:22:45.744710   29384 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 22:22:45.744726   29384 kubeadm.go:310] 
	I0827 22:22:45.744797   29384 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 22:22:45.744811   29384 kubeadm.go:310] 
	I0827 22:22:45.744892   29384 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 22:22:45.744987   29384 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 22:22:45.745081   29384 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 22:22:45.745093   29384 kubeadm.go:310] 
	I0827 22:22:45.745207   29384 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 22:22:45.745315   29384 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 22:22:45.745327   29384 kubeadm.go:310] 
	I0827 22:22:45.745437   29384 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ncx0g.2a6qvzpriwfvodsr \
	I0827 22:22:45.745566   29384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 \
	I0827 22:22:45.745598   29384 kubeadm.go:310] 	--control-plane 
	I0827 22:22:45.745605   29384 kubeadm.go:310] 
	I0827 22:22:45.745692   29384 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 22:22:45.745698   29384 kubeadm.go:310] 
	I0827 22:22:45.745783   29384 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ncx0g.2a6qvzpriwfvodsr \
	I0827 22:22:45.745915   29384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 
	I0827 22:22:45.747849   29384 kubeadm.go:310] W0827 22:22:35.491816     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:22:45.748216   29384 kubeadm.go:310] W0827 22:22:35.492666     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:22:45.748351   29384 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
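The two W-lines above are kubeadm warning that the stored config still uses the deprecated kubeadm.k8s.io/v1beta3 API. The fix kubeadm itself suggests is its migrate subcommand; a minimal sketch, with old.yaml/new.yaml as placeholder file names taken from the warning text:

    # rewrite a v1beta3 ClusterConfiguration/InitConfiguration in the current API version
    kubeadm config migrate --old-config old.yaml --new-config new.yaml
    # review what changed before pointing kubeadm at the new file
    diff -u old.yaml new.yaml
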
	I0827 22:22:45.748379   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:45.748389   29384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 22:22:45.749954   29384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0827 22:22:45.751246   29384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0827 22:22:45.756717   29384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0827 22:22:45.756735   29384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0827 22:22:45.775957   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
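The CNI manifest is streamed into the guest ("scp memory") and then applied with the kubectl binary minikube ships inside the VM, against the in-VM kubeconfig rather than the host's. From a shell inside the guest (for example via "minikube ssh -p ha-158602"), the equivalent manual step is roughly:

    # apply the generated CNI manifest with the bundled kubectl
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
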
	I0827 22:22:46.130373   29384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 22:22:46.130446   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:46.130474   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602 minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=true
	I0827 22:22:46.326174   29384 ops.go:34] apiserver oom_adj: -16
	I0827 22:22:46.326200   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:46.826245   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:47.326346   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:47.826659   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:48.327250   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:48.826302   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.327122   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.826329   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.915000   29384 kubeadm.go:1113] duration metric: took 3.78461182s to wait for elevateKubeSystemPrivileges
	I0827 22:22:49.915028   29384 kubeadm.go:394] duration metric: took 14.63943765s to StartCluster
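The burst of "kubectl get sa default" calls above is a readiness poll: minikube waits for the default ServiceAccount to appear before granting kube-system:default cluster-admin via the minikube-rbac binding. Hand-rolled inside the guest, the same loop looks roughly like this (minikube itself polls every 500ms):

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    # wait for the default ServiceAccount to be created by kube-controller-manager
    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 1
    done
    # then bind cluster-admin to kube-system:default (the minikube-rbac binding above)
    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig="$KCFG"
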
	I0827 22:22:49.915050   29384 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:49.915134   29384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:49.915793   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:49.916017   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0827 22:22:49.916028   29384 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 22:22:49.916087   29384 addons.go:69] Setting storage-provisioner=true in profile "ha-158602"
	I0827 22:22:49.916013   29384 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:49.916156   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:22:49.916119   29384 addons.go:69] Setting default-storageclass=true in profile "ha-158602"
	I0827 22:22:49.916211   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:49.916222   29384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-158602"
	I0827 22:22:49.916121   29384 addons.go:234] Setting addon storage-provisioner=true in "ha-158602"
	I0827 22:22:49.916289   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:22:49.916741   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.916778   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.916797   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.916828   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.931837   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0827 22:22:49.931983   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0827 22:22:49.932314   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.932433   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.932860   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.932885   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.932986   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.933006   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.933226   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.933334   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.933499   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.933794   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.933826   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.935547   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:49.935885   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 22:22:49.936488   29384 cert_rotation.go:140] Starting client certificate rotation controller
	I0827 22:22:49.936762   29384 addons.go:234] Setting addon default-storageclass=true in "ha-158602"
	I0827 22:22:49.936805   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:22:49.937200   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.937245   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.949831   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0827 22:22:49.950337   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.950827   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.950854   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.951203   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.951420   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.952187   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0827 22:22:49.952660   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.953109   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.953133   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.953268   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:49.953442   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.953888   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.953927   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.955405   29384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 22:22:49.956859   29384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:22:49.956875   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 22:22:49.956893   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:49.959686   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.960119   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:49.960145   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.960404   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:49.960585   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:49.960737   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:49.960904   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:49.974233   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0827 22:22:49.974626   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.975193   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.975221   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.975534   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.975748   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.977287   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:49.977497   29384 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 22:22:49.977513   29384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 22:22:49.977528   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:49.980128   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.980544   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:49.980571   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.980744   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:49.980922   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:49.981062   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:49.981196   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:50.062971   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0827 22:22:50.169857   29384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 22:22:50.183206   29384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:22:50.605591   29384 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
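The long sed pipeline a few lines above is how the host.minikube.internal record lands in CoreDNS: dump the coredns ConfigMap, splice a hosts{} block into the Corefile ahead of the forward plugin, and replace the ConfigMap. A simplified sketch of the same pattern, using the host's kubectl and dropping the extra "log" tweak:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
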
	I0827 22:22:50.605615   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.605635   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.605930   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.605958   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.605970   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.605985   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.605996   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.606208   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.606220   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.606235   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.606277   29384 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 22:22:50.606293   29384 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 22:22:50.606391   29384 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0827 22:22:50.606398   29384 round_trippers.go:469] Request Headers:
	I0827 22:22:50.606406   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:22:50.606409   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:22:50.619083   29384 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0827 22:22:50.619591   29384 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0827 22:22:50.619606   29384 round_trippers.go:469] Request Headers:
	I0827 22:22:50.619625   29384 round_trippers.go:473]     Content-Type: application/json
	I0827 22:22:50.619629   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:22:50.619633   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:22:50.623083   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
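The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT to .../storageclasses/standard is the default-storageclass addon marking the standard class as the cluster default. This is not minikube's actual code path, but roughly the same effect with plain kubectl would be:

    kubectl get storageclass
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
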
	I0827 22:22:50.623255   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.623268   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.623535   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.623553   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.623587   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.977205   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.977233   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.977539   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.977559   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.977569   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.977579   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.977813   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.977826   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.979490   29384 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0827 22:22:50.980882   29384 addons.go:510] duration metric: took 1.064849742s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0827 22:22:50.980912   29384 start.go:246] waiting for cluster config update ...
	I0827 22:22:50.980923   29384 start.go:255] writing updated cluster config ...
	I0827 22:22:50.982330   29384 out.go:201] 
	I0827 22:22:50.983724   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:50.983785   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:50.985492   29384 out.go:177] * Starting "ha-158602-m02" control-plane node in "ha-158602" cluster
	I0827 22:22:50.986474   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:50.986494   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:22:50.986581   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:22:50.986596   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:22:50.986663   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:50.986847   29384 start.go:360] acquireMachinesLock for ha-158602-m02: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:22:50.986893   29384 start.go:364] duration metric: took 25.735µs to acquireMachinesLock for "ha-158602-m02"
	I0827 22:22:50.986915   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:50.987012   29384 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0827 22:22:50.988953   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:22:50.989044   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:50.989075   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:51.003802   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0827 22:22:51.004211   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:51.004688   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:51.004709   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:51.004999   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:51.005166   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:22:51.005287   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:22:51.005453   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:22:51.005473   29384 client.go:168] LocalClient.Create starting
	I0827 22:22:51.005506   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:22:51.005543   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:51.005571   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:51.005642   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:22:51.005672   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:51.005689   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:51.005714   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:22:51.005727   29384 main.go:141] libmachine: (ha-158602-m02) Calling .PreCreateCheck
	I0827 22:22:51.005880   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:22:51.006209   29384 main.go:141] libmachine: Creating machine...
	I0827 22:22:51.006237   29384 main.go:141] libmachine: (ha-158602-m02) Calling .Create
	I0827 22:22:51.006350   29384 main.go:141] libmachine: (ha-158602-m02) Creating KVM machine...
	I0827 22:22:51.007588   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found existing default KVM network
	I0827 22:22:51.007721   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found existing private KVM network mk-ha-158602
	I0827 22:22:51.007864   29384 main.go:141] libmachine: (ha-158602-m02) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 ...
	I0827 22:22:51.007895   29384 main.go:141] libmachine: (ha-158602-m02) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:22:51.007962   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.007857   29745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:51.008102   29384 main.go:141] libmachine: (ha-158602-m02) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:22:51.244710   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.244579   29745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa...
	I0827 22:22:51.520653   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.520525   29745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/ha-158602-m02.rawdisk...
	I0827 22:22:51.520682   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Writing magic tar header
	I0827 22:22:51.520692   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Writing SSH key tar header
	I0827 22:22:51.520700   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.520661   29745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 ...
	I0827 22:22:51.520778   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02
	I0827 22:22:51.520828   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 (perms=drwx------)
	I0827 22:22:51.520856   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:22:51.520872   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:22:51.520888   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:51.520897   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:22:51.520908   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:22:51.520920   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:22:51.520933   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home
	I0827 22:22:51.520970   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:22:51.520986   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:22:51.520999   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:22:51.521015   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:22:51.521030   29384 main.go:141] libmachine: (ha-158602-m02) Creating domain...
	I0827 22:22:51.521041   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Skipping /home - not owner
	I0827 22:22:51.521890   29384 main.go:141] libmachine: (ha-158602-m02) define libvirt domain using xml: 
	I0827 22:22:51.521903   29384 main.go:141] libmachine: (ha-158602-m02) <domain type='kvm'>
	I0827 22:22:51.521910   29384 main.go:141] libmachine: (ha-158602-m02)   <name>ha-158602-m02</name>
	I0827 22:22:51.521915   29384 main.go:141] libmachine: (ha-158602-m02)   <memory unit='MiB'>2200</memory>
	I0827 22:22:51.521923   29384 main.go:141] libmachine: (ha-158602-m02)   <vcpu>2</vcpu>
	I0827 22:22:51.521930   29384 main.go:141] libmachine: (ha-158602-m02)   <features>
	I0827 22:22:51.521942   29384 main.go:141] libmachine: (ha-158602-m02)     <acpi/>
	I0827 22:22:51.521949   29384 main.go:141] libmachine: (ha-158602-m02)     <apic/>
	I0827 22:22:51.521955   29384 main.go:141] libmachine: (ha-158602-m02)     <pae/>
	I0827 22:22:51.521961   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.521969   29384 main.go:141] libmachine: (ha-158602-m02)   </features>
	I0827 22:22:51.521979   29384 main.go:141] libmachine: (ha-158602-m02)   <cpu mode='host-passthrough'>
	I0827 22:22:51.522005   29384 main.go:141] libmachine: (ha-158602-m02)   
	I0827 22:22:51.522024   29384 main.go:141] libmachine: (ha-158602-m02)   </cpu>
	I0827 22:22:51.522037   29384 main.go:141] libmachine: (ha-158602-m02)   <os>
	I0827 22:22:51.522044   29384 main.go:141] libmachine: (ha-158602-m02)     <type>hvm</type>
	I0827 22:22:51.522054   29384 main.go:141] libmachine: (ha-158602-m02)     <boot dev='cdrom'/>
	I0827 22:22:51.522061   29384 main.go:141] libmachine: (ha-158602-m02)     <boot dev='hd'/>
	I0827 22:22:51.522076   29384 main.go:141] libmachine: (ha-158602-m02)     <bootmenu enable='no'/>
	I0827 22:22:51.522086   29384 main.go:141] libmachine: (ha-158602-m02)   </os>
	I0827 22:22:51.522106   29384 main.go:141] libmachine: (ha-158602-m02)   <devices>
	I0827 22:22:51.522132   29384 main.go:141] libmachine: (ha-158602-m02)     <disk type='file' device='cdrom'>
	I0827 22:22:51.522149   29384 main.go:141] libmachine: (ha-158602-m02)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/boot2docker.iso'/>
	I0827 22:22:51.522160   29384 main.go:141] libmachine: (ha-158602-m02)       <target dev='hdc' bus='scsi'/>
	I0827 22:22:51.522172   29384 main.go:141] libmachine: (ha-158602-m02)       <readonly/>
	I0827 22:22:51.522180   29384 main.go:141] libmachine: (ha-158602-m02)     </disk>
	I0827 22:22:51.522193   29384 main.go:141] libmachine: (ha-158602-m02)     <disk type='file' device='disk'>
	I0827 22:22:51.522207   29384 main.go:141] libmachine: (ha-158602-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:22:51.522226   29384 main.go:141] libmachine: (ha-158602-m02)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/ha-158602-m02.rawdisk'/>
	I0827 22:22:51.522238   29384 main.go:141] libmachine: (ha-158602-m02)       <target dev='hda' bus='virtio'/>
	I0827 22:22:51.522250   29384 main.go:141] libmachine: (ha-158602-m02)     </disk>
	I0827 22:22:51.522260   29384 main.go:141] libmachine: (ha-158602-m02)     <interface type='network'>
	I0827 22:22:51.522283   29384 main.go:141] libmachine: (ha-158602-m02)       <source network='mk-ha-158602'/>
	I0827 22:22:51.522300   29384 main.go:141] libmachine: (ha-158602-m02)       <model type='virtio'/>
	I0827 22:22:51.522312   29384 main.go:141] libmachine: (ha-158602-m02)     </interface>
	I0827 22:22:51.522322   29384 main.go:141] libmachine: (ha-158602-m02)     <interface type='network'>
	I0827 22:22:51.522332   29384 main.go:141] libmachine: (ha-158602-m02)       <source network='default'/>
	I0827 22:22:51.522338   29384 main.go:141] libmachine: (ha-158602-m02)       <model type='virtio'/>
	I0827 22:22:51.522345   29384 main.go:141] libmachine: (ha-158602-m02)     </interface>
	I0827 22:22:51.522352   29384 main.go:141] libmachine: (ha-158602-m02)     <serial type='pty'>
	I0827 22:22:51.522381   29384 main.go:141] libmachine: (ha-158602-m02)       <target port='0'/>
	I0827 22:22:51.522396   29384 main.go:141] libmachine: (ha-158602-m02)     </serial>
	I0827 22:22:51.522405   29384 main.go:141] libmachine: (ha-158602-m02)     <console type='pty'>
	I0827 22:22:51.522413   29384 main.go:141] libmachine: (ha-158602-m02)       <target type='serial' port='0'/>
	I0827 22:22:51.522424   29384 main.go:141] libmachine: (ha-158602-m02)     </console>
	I0827 22:22:51.522434   29384 main.go:141] libmachine: (ha-158602-m02)     <rng model='virtio'>
	I0827 22:22:51.522447   29384 main.go:141] libmachine: (ha-158602-m02)       <backend model='random'>/dev/random</backend>
	I0827 22:22:51.522457   29384 main.go:141] libmachine: (ha-158602-m02)     </rng>
	I0827 22:22:51.522465   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.522478   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.522506   29384 main.go:141] libmachine: (ha-158602-m02)   </devices>
	I0827 22:22:51.522528   29384 main.go:141] libmachine: (ha-158602-m02) </domain>
	I0827 22:22:51.522542   29384 main.go:141] libmachine: (ha-158602-m02) 
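The XML above is the libvirt domain definition for the m02 VM; the kvm2 driver submits it through the libvirt API rather than shelling out. Assuming the XML were saved to ha-158602-m02.xml (a placeholder name), the virsh equivalent would be roughly:

    # define and boot the domain against the system libvirt instance (KVMQemuURI above)
    virsh -c qemu:///system define ha-158602-m02.xml
    virsh -c qemu:///system start ha-158602-m02
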
	I0827 22:22:51.529093   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:c8:11:5e in network default
	I0827 22:22:51.529610   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring networks are active...
	I0827 22:22:51.529633   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:51.530655   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring network default is active
	I0827 22:22:51.531006   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring network mk-ha-158602 is active
	I0827 22:22:51.531405   29384 main.go:141] libmachine: (ha-158602-m02) Getting domain xml...
	I0827 22:22:51.532192   29384 main.go:141] libmachine: (ha-158602-m02) Creating domain...
	I0827 22:22:52.755344   29384 main.go:141] libmachine: (ha-158602-m02) Waiting to get IP...
	I0827 22:22:52.756055   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:52.756425   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:52.756455   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:52.756415   29745 retry.go:31] will retry after 194.568413ms: waiting for machine to come up
	I0827 22:22:52.953024   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:52.953407   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:52.953434   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:52.953394   29745 retry.go:31] will retry after 325.007706ms: waiting for machine to come up
	I0827 22:22:53.280017   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:53.280646   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:53.280695   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:53.280532   29745 retry.go:31] will retry after 326.358818ms: waiting for machine to come up
	I0827 22:22:53.608162   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:53.608635   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:53.608661   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:53.608597   29745 retry.go:31] will retry after 573.876873ms: waiting for machine to come up
	I0827 22:22:54.184341   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:54.184903   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:54.184933   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:54.184861   29745 retry.go:31] will retry after 467.432481ms: waiting for machine to come up
	I0827 22:22:54.653558   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:54.653987   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:54.654003   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:54.653939   29745 retry.go:31] will retry after 932.113121ms: waiting for machine to come up
	I0827 22:22:55.588071   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:55.588548   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:55.588570   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:55.588507   29745 retry.go:31] will retry after 1.106053983s: waiting for machine to come up
	I0827 22:22:56.695946   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:56.696501   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:56.696527   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:56.696449   29745 retry.go:31] will retry after 1.180147184s: waiting for machine to come up
	I0827 22:22:57.877879   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:57.878219   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:57.878246   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:57.878186   29745 retry.go:31] will retry after 1.604135095s: waiting for machine to come up
	I0827 22:22:59.483523   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:59.484044   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:59.484070   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:59.483980   29745 retry.go:31] will retry after 2.081579241s: waiting for machine to come up
	I0827 22:23:01.567515   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:01.568007   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:01.568035   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:01.567958   29745 retry.go:31] will retry after 2.372701308s: waiting for machine to come up
	I0827 22:23:03.942705   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:03.943068   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:03.943090   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:03.943047   29745 retry.go:31] will retry after 3.144488032s: waiting for machine to come up
	I0827 22:23:07.088992   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:07.089281   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:07.089305   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:07.089253   29745 retry.go:31] will retry after 4.261022366s: waiting for machine to come up
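The retry loop above keeps asking libvirt, with increasing jittered delays, for a DHCP lease matching the VM's MAC address on the private network. Checked by hand, that lookup is roughly:

    # list leases on the cluster network and look for the m02 MAC address
    virsh -c qemu:///system net-dhcp-leases mk-ha-158602 | grep 52:54:00:fa:7e:06
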
	I0827 22:23:11.352145   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.352500   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has current primary IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.352526   29384 main.go:141] libmachine: (ha-158602-m02) Found IP for machine: 192.168.39.142
	I0827 22:23:11.352541   29384 main.go:141] libmachine: (ha-158602-m02) Reserving static IP address...
	I0827 22:23:11.352864   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find host DHCP lease matching {name: "ha-158602-m02", mac: "52:54:00:fa:7e:06", ip: "192.168.39.142"} in network mk-ha-158602
	I0827 22:23:11.426293   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Getting to WaitForSSH function...
	I0827 22:23:11.426351   29384 main.go:141] libmachine: (ha-158602-m02) Reserved static IP address: 192.168.39.142
	I0827 22:23:11.426366   29384 main.go:141] libmachine: (ha-158602-m02) Waiting for SSH to be available...
	I0827 22:23:11.429192   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.429602   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602
	I0827 22:23:11.429645   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find defined IP address of network mk-ha-158602 interface with MAC address 52:54:00:fa:7e:06
	I0827 22:23:11.429800   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH client type: external
	I0827 22:23:11.429825   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa (-rw-------)
	I0827 22:23:11.429892   29384 main.go:141] libmachine: (ha-158602-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:23:11.429925   29384 main.go:141] libmachine: (ha-158602-m02) DBG | About to run SSH command:
	I0827 22:23:11.429971   29384 main.go:141] libmachine: (ha-158602-m02) DBG | exit 0
	I0827 22:23:11.433467   29384 main.go:141] libmachine: (ha-158602-m02) DBG | SSH cmd err, output: exit status 255: 
	I0827 22:23:11.433491   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0827 22:23:11.433501   29384 main.go:141] libmachine: (ha-158602-m02) DBG | command : exit 0
	I0827 22:23:11.433509   29384 main.go:141] libmachine: (ha-158602-m02) DBG | err     : exit status 255
	I0827 22:23:11.433525   29384 main.go:141] libmachine: (ha-158602-m02) DBG | output  : 
	I0827 22:23:14.435633   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Getting to WaitForSSH function...
	I0827 22:23:14.438942   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.439399   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.439427   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.439591   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH client type: external
	I0827 22:23:14.439616   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa (-rw-------)
	I0827 22:23:14.439649   29384 main.go:141] libmachine: (ha-158602-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:23:14.439666   29384 main.go:141] libmachine: (ha-158602-m02) DBG | About to run SSH command:
	I0827 22:23:14.439683   29384 main.go:141] libmachine: (ha-158602-m02) DBG | exit 0
	I0827 22:23:14.560627   29384 main.go:141] libmachine: (ha-158602-m02) DBG | SSH cmd err, output: <nil>: 
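WaitForSSH shells out to the system ssh with the options shown above and reruns "exit 0" until it succeeds; the first attempt fails with status 255 because sshd in the guest is not up yet. A hand-rolled equivalent of that wait, as a sketch:

    KEY=/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa
    # retry every few seconds until sshd inside the guest answers
    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -o ConnectTimeout=10 -i "$KEY" docker@192.168.39.142 'exit 0'; do
      sleep 3
    done
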
	I0827 22:23:14.560871   29384 main.go:141] libmachine: (ha-158602-m02) KVM machine creation complete!
	I0827 22:23:14.561354   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:23:14.561929   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:14.562155   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:14.562361   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:23:14.562389   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:23:14.563859   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:23:14.563876   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:23:14.563886   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:23:14.563895   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.566614   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.566954   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.566976   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.567129   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.567287   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.567453   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.567603   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.567797   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.568056   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.568072   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:23:14.663565   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:23:14.663591   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:23:14.663599   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.666428   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.666794   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.666822   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.667033   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.667228   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.667397   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.667529   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.667677   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.667908   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.667920   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:23:14.764898   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:23:14.764966   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:23:14.764973   29384 main.go:141] libmachine: Provisioning with buildroot...
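Provisioner detection is just "cat /etc/os-release" over SSH; the ID field (buildroot here) selects the buildroot provisioner. Run inside the guest, the check amounts to:

    # /etc/os-release is a shell-sourceable key=value file
    . /etc/os-release
    echo "detected distro: ${ID} ${VERSION_ID}"   # buildroot 2023.02.9 on the minikube ISO
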
	I0827 22:23:14.764994   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.765210   29384 buildroot.go:166] provisioning hostname "ha-158602-m02"
	I0827 22:23:14.765234   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.765378   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.767952   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.768354   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.768380   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.768574   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.768775   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.768928   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.769043   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.769178   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.769380   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.769400   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602-m02 && echo "ha-158602-m02" | sudo tee /etc/hostname
	I0827 22:23:14.876662   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602-m02
	
	I0827 22:23:14.876693   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.879304   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.879683   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.879717   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.879856   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.880131   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.880325   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.880475   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.880658   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.880814   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.880829   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:23:14.985181   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:23:14.985208   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:23:14.985226   29384 buildroot.go:174] setting up certificates
	I0827 22:23:14.985238   29384 provision.go:84] configureAuth start
	I0827 22:23:14.985249   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.985577   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:14.988233   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.988621   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.988654   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.988772   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.990837   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.991103   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.991133   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.991273   29384 provision.go:143] copyHostCerts
	I0827 22:23:14.991305   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:23:14.991344   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:23:14.991356   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:23:14.991437   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:23:14.991508   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:23:14.991525   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:23:14.991531   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:23:14.991555   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:23:14.991600   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:23:14.991617   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:23:14.991623   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:23:14.991645   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:23:14.991703   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602-m02 san=[127.0.0.1 192.168.39.142 ha-158602-m02 localhost minikube]
	I0827 22:23:15.100282   29384 provision.go:177] copyRemoteCerts
	I0827 22:23:15.100347   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:23:15.100370   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.102865   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.103160   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.103183   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.103346   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.103548   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.103673   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.103780   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.182993   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:23:15.183062   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 22:23:15.205343   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:23:15.205413   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:23:15.228193   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:23:15.228275   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:23:15.250829   29384 provision.go:87] duration metric: took 265.58072ms to configureAuth
	I0827 22:23:15.250855   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:23:15.251072   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:15.251145   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.253917   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.254355   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.254376   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.254553   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.254724   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.254873   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.255009   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.255202   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:15.255362   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:15.255375   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:23:15.465560   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:23:15.465592   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:23:15.465603   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetURL
	I0827 22:23:15.466932   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using libvirt version 6000000
	I0827 22:23:15.469084   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.469410   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.469442   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.469554   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:23:15.469568   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:23:15.469595   29384 client.go:171] duration metric: took 24.464104776s to LocalClient.Create
	I0827 22:23:15.469625   29384 start.go:167] duration metric: took 24.464170956s to libmachine.API.Create "ha-158602"
	I0827 22:23:15.469636   29384 start.go:293] postStartSetup for "ha-158602-m02" (driver="kvm2")
	I0827 22:23:15.469650   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:23:15.469672   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.469959   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:23:15.469982   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.472126   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.472495   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.472524   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.472652   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.472852   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.473029   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.473181   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.550537   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:23:15.554365   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:23:15.554393   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:23:15.554452   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:23:15.554542   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:23:15.554556   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:23:15.554658   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:23:15.563879   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:23:15.585230   29384 start.go:296] duration metric: took 115.581036ms for postStartSetup
	I0827 22:23:15.585280   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:23:15.585854   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:15.588435   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.588827   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.588847   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.589102   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:23:15.589314   29384 start.go:128] duration metric: took 24.602284134s to createHost
	I0827 22:23:15.589340   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.591310   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.591632   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.591660   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.591800   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.591938   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.592085   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.592174   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.592317   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:15.592544   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:15.592559   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:23:15.688858   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797395.666815643
	
	I0827 22:23:15.688881   29384 fix.go:216] guest clock: 1724797395.666815643
	I0827 22:23:15.688891   29384 fix.go:229] Guest: 2024-08-27 22:23:15.666815643 +0000 UTC Remote: 2024-08-27 22:23:15.589326478 +0000 UTC m=+69.897500846 (delta=77.489165ms)
	I0827 22:23:15.688909   29384 fix.go:200] guest clock delta is within tolerance: 77.489165ms
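
fix.go compares the guest's `date +%s.%N` output against the host's view of the remote time and accepts the resulting 77.489165ms skew. A small sketch of that comparison; the tolerance value is assumed here purely for illustration:

// Parse `date +%s.%N` output from the guest and compare with host time.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOut string, hostRemote time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRemote), nil
}

func main() {
	// Values lifted from the log lines above.
	hostRemote := time.Date(2024, 8, 27, 22, 23, 15, 589326478, time.UTC)
	delta, err := guestClockDelta("1724797395.666815643", hostRemote)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v (within %v: %t)\n", delta, tolerance, delta <= tolerance)
}
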
	I0827 22:23:15.688917   29384 start.go:83] releasing machines lock for "ha-158602-m02", held for 24.702011455s
	I0827 22:23:15.688941   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.689186   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:15.691448   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.691761   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.691786   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.694101   29384 out.go:177] * Found network options:
	I0827 22:23:15.695206   29384 out.go:177]   - NO_PROXY=192.168.39.77
	W0827 22:23:15.696336   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:23:15.696377   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.696887   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.697052   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.697128   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:23:15.697169   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	W0827 22:23:15.697224   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:23:15.697276   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:23:15.697292   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.699753   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700017   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700121   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.700147   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700313   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.700413   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.700436   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700508   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.700672   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.700694   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.700849   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.700864   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.701000   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.701144   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.932499   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:23:15.938103   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:23:15.938181   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:23:15.959322   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:23:15.959350   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:23:15.959407   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:23:15.978390   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:23:15.993171   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:23:15.993225   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:23:16.006779   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:23:16.020812   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:23:16.147380   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:23:16.319051   29384 docker.go:233] disabling docker service ...
	I0827 22:23:16.319135   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:23:16.332705   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:23:16.344782   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:23:16.462518   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:23:16.575127   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:23:16.589418   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:23:16.606616   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:23:16.606677   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.616833   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:23:16.616896   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.627069   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.636890   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.646636   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:23:16.656720   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.666297   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.682159   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.692011   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:23:16.700996   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:23:16.701067   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:23:16.714552   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:23:16.724642   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:16.830976   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:23:16.915581   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:23:16.915651   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:23:16.919989   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:23:16.920047   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:23:16.923656   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:23:16.960529   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:23:16.960621   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:23:16.986797   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:23:17.015475   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:23:17.016779   29384 out.go:177]   - env NO_PROXY=192.168.39.77
	I0827 22:23:17.018063   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:17.020773   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:17.021153   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:17.021190   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:17.021416   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:23:17.025661   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:23:17.038035   29384 mustload.go:65] Loading cluster: ha-158602
	I0827 22:23:17.038200   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:17.038554   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:17.038580   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:17.053097   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0827 22:23:17.053473   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:17.053904   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:17.053924   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:17.054181   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:17.054376   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:23:17.056042   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:23:17.056327   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:17.056368   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:17.070703   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0827 22:23:17.071108   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:17.071593   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:17.071613   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:17.071879   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:17.072061   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:23:17.072269   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.142
	I0827 22:23:17.072285   29384 certs.go:194] generating shared ca certs ...
	I0827 22:23:17.072303   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.072432   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:23:17.072504   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:23:17.072519   29384 certs.go:256] generating profile certs ...
	I0827 22:23:17.072604   29384 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:23:17.072627   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267
	I0827 22:23:17.072639   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.254]
	I0827 22:23:17.116741   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 ...
	I0827 22:23:17.116768   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267: {Name:mk70b4f114965c8b6603d6433cb7a61c1c7912e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.116927   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267 ...
	I0827 22:23:17.116940   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267: {Name:mk8147ed32f4bc89d4feb83d8cd3d9f45e7b461e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.117024   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:23:17.117148   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:23:17.117272   29384 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
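
The apiserver certificate minted above is signed by the shared minikubeCA and carries the six IP SANs shown (service IPs, loopback, both control-plane node IPs, and the HA virtual IP 192.168.39.254), so the API server is reachable under any of those addresses. A self-contained sketch with Go's crypto/x509 showing how a CA-signed certificate with that SAN list can be produced (illustrative, not minikube's crypto.go):

// Mint a throwaway CA and a server certificate with the IP SANs listed in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate; the IP SANs mirror the apiserver cert list in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.77"), net.ParseIP("192.168.39.142"), net.ParseIP("192.168.39.254"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
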
	I0827 22:23:17.117285   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:23:17.117298   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:23:17.117318   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:23:17.117331   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:23:17.117343   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:23:17.117354   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:23:17.117364   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:23:17.117375   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:23:17.117421   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:23:17.117447   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:23:17.117456   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:23:17.117475   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:23:17.117496   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:23:17.117519   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:23:17.117555   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:23:17.117589   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.117603   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.117615   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.117642   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:23:17.120527   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:17.120915   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:23:17.120943   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:17.121066   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:23:17.121238   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:23:17.121367   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:23:17.121488   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:23:17.196819   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0827 22:23:17.201071   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0827 22:23:17.211087   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0827 22:23:17.215455   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0827 22:23:17.225740   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0827 22:23:17.229475   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0827 22:23:17.239004   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0827 22:23:17.242794   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0827 22:23:17.252194   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0827 22:23:17.255806   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0827 22:23:17.264992   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0827 22:23:17.268820   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0827 22:23:17.278569   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:23:17.301784   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:23:17.324240   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:23:17.346025   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:23:17.367550   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0827 22:23:17.389149   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:23:17.411062   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:23:17.432734   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:23:17.455367   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:23:17.477466   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:23:17.499572   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:23:17.521706   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0827 22:23:17.536474   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0827 22:23:17.551438   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0827 22:23:17.566840   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0827 22:23:17.582029   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0827 22:23:17.597562   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0827 22:23:17.612284   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0827 22:23:17.627253   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:23:17.632437   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:23:17.642395   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.646396   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.646433   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.651638   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:23:17.661370   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:23:17.671124   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.675273   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.675318   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.680489   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 22:23:17.690088   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:23:17.699733   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.703738   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.703778   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.708689   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:23:17.718392   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:23:17.721896   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:23:17.721954   29384 kubeadm.go:934] updating node {m02 192.168.39.142 8443 v1.31.0 crio true true} ...
	I0827 22:23:17.722032   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 22:23:17.722057   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:23:17.722083   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:23:17.737084   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:23:17.737154   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
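
The manifest above is the static pod that kube-vip.go generates for each control-plane node: kube-vip advertises the virtual IP 192.168.39.254 over ARP on eth0, elects a leader through the plndr-cp-lock lease, and load-balances API server traffic on port 8443. A rough sketch of rendering such a manifest with text/template; the template below is trimmed and illustrative, not minikube's actual one:

// Render a trimmed kube-vip static pod manifest from node-specific values.
package main

import (
	"os"
	"text/template"
)

// Only the fields the log shows being parameterized are templated here.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
    - {name: lb_port, value: "{{.Port}}"}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

type params struct {
	VIP, Interface, Image string
	Port                  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// Values taken from the generated config shown in the log.
	if err := t.Execute(os.Stdout, params{
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		Port:      8443,
	}); err != nil {
		panic(err)
	}
}
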
	I0827 22:23:17.737208   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:23:17.746337   29384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0827 22:23:17.746386   29384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0827 22:23:17.754816   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0827 22:23:17.754838   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:23:17.754847   29384 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0827 22:23:17.754889   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:23:17.754815   29384 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0827 22:23:17.759972   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0827 22:23:17.760005   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0827 22:23:18.689698   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:23:18.689792   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:23:18.695364   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0827 22:23:18.695401   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0827 22:23:18.858280   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:23:18.889059   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:23:18.889171   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:23:18.901142   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0827 22:23:18.901176   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0827 22:23:19.228635   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0827 22:23:19.238415   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0827 22:23:19.254976   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:23:19.270796   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0827 22:23:19.286360   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:23:19.290233   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:23:19.302822   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:19.418817   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:23:19.436857   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:23:19.437265   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:19.437314   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:19.452544   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0827 22:23:19.453031   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:19.453525   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:19.453544   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:19.453889   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:19.454107   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:23:19.454258   29384 start.go:317] joinCluster: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:23:19.454350   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0827 22:23:19.454370   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:23:19.457214   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:19.457649   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:23:19.457674   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:19.457830   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:23:19.457988   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:23:19.458132   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:23:19.458273   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:23:19.597839   29384 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:23:19.597880   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ucr0iw.a616mktqyqppgnwr --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m02 --control-plane --apiserver-advertise-address=192.168.39.142 --apiserver-bind-port=8443"
	I0827 22:23:41.399875   29384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ucr0iw.a616mktqyqppgnwr --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m02 --control-plane --apiserver-advertise-address=192.168.39.142 --apiserver-bind-port=8443": (21.801972228s)
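
The join command executed above is the line printed by "kubeadm token create --print-join-command" on the primary, extended with the control-plane flags needed for an HA member. A minimal Go sketch of that string assembly (flag values are the ones visible in the log; the helper name is invented and the token/hash are elided):

	package main

	import "fmt"

	// buildControlPlaneJoin appends the extra flags added on top of the base
	// "kubeadm join ..." line printed by the primary control plane.
	func buildControlPlaneJoin(baseJoin, nodeName, advertiseIP string, bindPort int) string {
		return fmt.Sprintf(
			"%s --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
			baseJoin, nodeName, advertiseIP, bindPort)
	}

	func main() {
		base := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
		fmt.Println(buildControlPlaneJoin(base, "ha-158602-m02", "192.168.39.142", 8443))
	}
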
	I0827 22:23:41.399915   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0827 22:23:41.847756   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602-m02 minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=false
	I0827 22:23:41.970431   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-158602-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0827 22:23:42.092283   29384 start.go:319] duration metric: took 22.63801931s to joinCluster
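
After the join, the new member is labelled and its control-plane NoSchedule taint is removed (the two kubectl invocations above). A client-go sketch of the same two operations, assuming the kubeconfig path from the log and a simplified label payload (not minikube's full label set):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19522-7571/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		node := "ha-158602-m02"

		// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
		patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
		if _, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			panic(err)
		}

		// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
		n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		var kept []corev1.Taint
		for _, t := range n.Spec.Taints {
			if t.Key != "node-role.kubernetes.io/control-plane" {
				kept = append(kept, t)
			}
		}
		n.Spec.Taints = kept
		if _, err := cs.CoreV1().Nodes().Update(ctx, n, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("node labelled and untainted")
	}
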
	I0827 22:23:42.092371   29384 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:23:42.092716   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:42.093923   29384 out.go:177] * Verifying Kubernetes components...
	I0827 22:23:42.095489   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:42.337315   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:23:42.360051   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:23:42.360395   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0827 22:23:42.360509   29384 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.77:8443
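
The warning above is the client replacing the HA virtual IP from kubeconfig (192.168.39.254) with the concrete endpoint of the primary apiserver (192.168.39.77). A small client-go sketch, assuming the kubeconfig path and addresses copied from this log, of loading a kubeconfig and overriding the host before building a clientset:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19522-7571/kubeconfig")
		if err != nil {
			panic(err)
		}
		// The kubeconfig points at the load-balanced VIP; talk to one apiserver directly instead.
		cfg.Host = "https://192.168.39.77:8443"

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("reached %s, %d nodes\n", cfg.Host, len(nodes.Items))
	}
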
	I0827 22:23:42.360816   29384 node_ready.go:35] waiting up to 6m0s for node "ha-158602-m02" to be "Ready" ...
	I0827 22:23:42.360931   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:42.360943   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:42.360954   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:42.360965   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:42.371816   29384 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0827 22:23:42.861719   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:42.861739   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:42.861751   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:42.861756   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:42.867443   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:23:43.361465   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:43.361489   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:43.361500   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:43.361506   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:43.368142   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:23:43.861711   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:43.861737   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:43.861748   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:43.861755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:43.864816   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:44.361761   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:44.361782   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:44.361788   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:44.361793   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:44.365264   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:44.365782   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:44.861642   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:44.861669   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:44.861681   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:44.861687   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:44.864853   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:45.361722   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:45.361743   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:45.361751   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:45.361755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:45.365102   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:45.861804   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:45.861832   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:45.861843   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:45.861849   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:45.865089   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.361335   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:46.361361   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:46.361371   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:46.361377   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:46.364754   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.861229   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:46.861250   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:46.861258   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:46.861263   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:46.864782   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.865391   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:47.361745   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:47.361770   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:47.361782   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:47.361790   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:47.364768   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:47.861755   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:47.861781   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:47.861788   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:47.861793   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:47.864844   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.361704   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:48.361724   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:48.361732   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:48.361735   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:48.364864   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.861716   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:48.861753   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:48.861765   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:48.861772   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:48.864993   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.865688   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:49.361696   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:49.361714   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:49.361722   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:49.361727   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:49.364009   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:49.861323   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:49.861371   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:49.861383   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:49.861390   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:49.864399   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:50.361738   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:50.361763   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:50.361780   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:50.361785   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:50.372425   29384 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0827 22:23:50.861692   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:50.861712   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:50.861719   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:50.861724   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:50.864315   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:51.361563   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:51.361588   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:51.361601   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:51.361606   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:51.364710   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:51.365212   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:51.861601   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:51.861628   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:51.861639   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:51.861644   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:51.864745   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:52.361691   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:52.361716   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:52.361727   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:52.361733   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:52.364864   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:52.861694   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:52.861716   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:52.861727   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:52.861732   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:52.865072   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:53.361245   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:53.361268   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:53.361279   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:53.361284   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:53.364123   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:53.862011   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:53.862037   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:53.862048   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:53.862054   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:53.866913   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:23:53.867501   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:54.361676   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:54.361701   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:54.361709   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:54.361713   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:54.364743   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:54.861208   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:54.861230   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:54.861239   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:54.861243   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:54.863841   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:55.361750   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:55.361781   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:55.361793   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:55.361798   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:55.368246   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:23:55.861192   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:55.861219   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:55.861235   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:55.861240   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:55.864110   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:56.361561   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:56.361580   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:56.361600   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:56.361606   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:56.364724   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:56.365228   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:56.861717   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:56.861741   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:56.861749   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:56.861753   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:56.865067   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:57.361760   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:57.361786   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:57.361798   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:57.361804   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:57.364673   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:57.861733   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:57.861756   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:57.861767   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:57.861777   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:57.864796   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:58.361725   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:58.361746   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:58.361754   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:58.361758   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:58.365625   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:58.366190   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:58.861364   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:58.861386   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:58.861394   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:58.861398   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:58.864292   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:59.362002   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:59.362027   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:59.362036   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:59.362041   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:59.365024   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:59.861335   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:59.861369   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:59.861378   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:59.861382   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:59.864212   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.361420   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.361446   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.361455   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.361459   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.364515   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.860974   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.861002   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.861013   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.861019   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.864222   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.864966   29384 node_ready.go:49] node "ha-158602-m02" has status "Ready":"True"
	I0827 22:24:00.864982   29384 node_ready.go:38] duration metric: took 18.504142957s for node "ha-158602-m02" to be "Ready" ...
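
The ~500 ms GET loop above polls the node object until its Ready condition turns True. A hand-rolled Go sketch of the same check with client-go (interval and timeout chosen to mirror the log; this is not minikube's implementation, and the kubeconfig path is assumed to be the default):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-158602-m02", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
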
	I0827 22:24:00.864991   29384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:24:00.865070   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:00.865081   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.865088   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.865094   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.869052   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.874795   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.874865   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxzgs
	I0827 22:24:00.874871   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.874878   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.874882   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.877799   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.878375   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.878389   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.878397   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.878401   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.880710   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.881163   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.881179   29384 pod_ready.go:82] duration metric: took 6.360916ms for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.881188   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.881233   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x6dcd
	I0827 22:24:00.881240   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.881247   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.881252   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.883599   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.884224   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.884237   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.884244   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.884248   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.886706   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.887223   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.887239   29384 pod_ready.go:82] duration metric: took 6.045435ms for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.887247   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.887325   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602
	I0827 22:24:00.887335   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.887342   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.887359   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.889398   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.890037   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.890052   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.890060   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.890063   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.892148   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.892760   29384 pod_ready.go:93] pod "etcd-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.892784   29384 pod_ready.go:82] duration metric: took 5.530261ms for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.892796   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.892842   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m02
	I0827 22:24:00.892850   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.892857   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.892860   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.895124   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.895601   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.895621   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.895629   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.895635   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.897675   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.898231   29384 pod_ready.go:93] pod "etcd-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.898248   29384 pod_ready.go:82] duration metric: took 5.445558ms for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.898261   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.061746   29384 request.go:632] Waited for 163.434873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:24:01.061822   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:24:01.061831   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.061846   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.061852   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.065188   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:01.261601   29384 request.go:632] Waited for 195.377899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:01.261653   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:01.261658   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.261666   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.261671   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.264407   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:01.265048   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:01.265068   29384 pod_ready.go:82] duration metric: took 366.801663ms for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
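
The "Waited for … due to client-side throttling" lines come from client-go's local rate limiter (roughly QPS 5 / burst 10 when rest.Config leaves those fields at zero), not from API priority and fairness on the server, as the message itself notes. If that queuing matters in test tooling, the limits can be raised on the config before the clientset is built; a sketch, assuming the default kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Raise the client-side rate limits so bursts of sequential GETs
		// (like the readiness checks in this log) are not queued locally.
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = cs
		fmt.Printf("clientset ready against %s with QPS=%v Burst=%v\n", cfg.Host, cfg.QPS, cfg.Burst)
	}
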
	I0827 22:24:01.265078   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.460991   29384 request.go:632] Waited for 195.852895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:24:01.461056   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:24:01.461061   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.461068   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.461072   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.464405   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:01.661661   29384 request.go:632] Waited for 196.322387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:01.661722   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:01.661735   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.661755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.661778   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.665536   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:01.666159   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:01.666177   29384 pod_ready.go:82] duration metric: took 401.092427ms for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.666189   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.861306   29384 request.go:632] Waited for 195.042639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:24:01.861414   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:24:01.861427   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.861437   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.861445   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.864421   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.061461   29384 request.go:632] Waited for 196.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.061514   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.061520   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.061530   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.061545   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.064495   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.064970   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.064988   29384 pod_ready.go:82] duration metric: took 398.791787ms for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.064997   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.261529   29384 request.go:632] Waited for 196.463267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:24:02.261583   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:24:02.261590   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.261600   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.261605   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.264684   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:02.461834   29384 request.go:632] Waited for 196.352983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:02.461899   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:02.461904   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.461912   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.461915   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.465015   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:02.465502   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.465520   29384 pod_ready.go:82] duration metric: took 400.516744ms for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.465532   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.661656   29384 request.go:632] Waited for 196.035045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:24:02.661715   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:24:02.661720   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.661728   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.661733   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.666595   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:02.861627   29384 request.go:632] Waited for 194.390829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.861684   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.861689   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.861698   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.861703   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.864690   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.865351   29384 pod_ready.go:93] pod "kube-proxy-5pmrv" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.865376   29384 pod_ready.go:82] duration metric: took 399.833719ms for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.865385   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.061431   29384 request.go:632] Waited for 195.967993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:24:03.061492   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:24:03.061499   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.061510   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.061520   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.064456   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:03.261507   29384 request.go:632] Waited for 196.385048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:03.261571   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:03.261578   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.261589   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.261595   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.264613   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:03.265201   29384 pod_ready.go:93] pod "kube-proxy-slgmm" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:03.265220   29384 pod_ready.go:82] duration metric: took 399.828388ms for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.265232   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.461417   29384 request.go:632] Waited for 196.094406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:24:03.461481   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:24:03.461489   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.461499   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.461506   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.466110   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:03.661058   29384 request.go:632] Waited for 194.303204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:03.661142   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:03.661152   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.661159   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.661164   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.664494   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:03.665204   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:03.665222   29384 pod_ready.go:82] duration metric: took 399.982907ms for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.665231   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.861342   29384 request.go:632] Waited for 196.034031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:24:03.861402   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:24:03.861407   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.861416   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.861420   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.864317   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:04.061128   29384 request.go:632] Waited for 196.306564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:04.061209   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:04.061215   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.061223   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.061227   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.064333   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.064792   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:04.064811   29384 pod_ready.go:82] duration metric: took 399.574125ms for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:04.064821   29384 pod_ready.go:39] duration metric: took 3.199819334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
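
Each pod_ready wait above boils down to reading the pod's Ready condition from its status. A reduced client-go sketch of that check for a single pod (pod name taken from the log, kubeconfig path assumed to be the default):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		p, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-158602-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s ready=%v phase=%s\n", p.Name, podReady(p), p.Status.Phase)
	}
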
	I0827 22:24:04.064837   29384 api_server.go:52] waiting for apiserver process to appear ...
	I0827 22:24:04.064892   29384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:24:04.081133   29384 api_server.go:72] duration metric: took 21.988731021s to wait for apiserver process to appear ...
	I0827 22:24:04.081153   29384 api_server.go:88] waiting for apiserver healthz status ...
	I0827 22:24:04.081181   29384 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0827 22:24:04.085562   29384 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0827 22:24:04.085666   29384 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I0827 22:24:04.085676   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.085683   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.085688   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.086542   29384 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0827 22:24:04.086702   29384 api_server.go:141] control plane version: v1.31.0
	I0827 22:24:04.086720   29384 api_server.go:131] duration metric: took 5.560987ms to wait for apiserver health ...
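
The healthz and /version probes above can be reproduced through the same clientset: the discovery REST client can hit raw paths such as /healthz, and ServerVersion() returns the control-plane version string (v1.31.0 in this run). A sketch, assuming the default kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// GET /healthz returns the literal string "ok" when the apiserver is healthy.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			panic(err)
		}
		// GET /version reports the control-plane build, e.g. v1.31.0.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz=%s version=%s\n", body, v.GitVersion)
	}
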
	I0827 22:24:04.086730   29384 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 22:24:04.261058   29384 request.go:632] Waited for 174.261561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.261147   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.261156   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.261168   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.261179   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.265764   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:04.271274   29384 system_pods.go:59] 17 kube-system pods found
	I0827 22:24:04.271301   29384 system_pods.go:61] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:24:04.271306   29384 system_pods.go:61] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:24:04.271310   29384 system_pods.go:61] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:24:04.271313   29384 system_pods.go:61] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:24:04.271319   29384 system_pods.go:61] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:24:04.271323   29384 system_pods.go:61] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:24:04.271329   29384 system_pods.go:61] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:24:04.271334   29384 system_pods.go:61] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:24:04.271339   29384 system_pods.go:61] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:24:04.271344   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:24:04.271351   29384 system_pods.go:61] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:24:04.271356   29384 system_pods.go:61] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:24:04.271363   29384 system_pods.go:61] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:24:04.271366   29384 system_pods.go:61] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:24:04.271369   29384 system_pods.go:61] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:24:04.271372   29384 system_pods.go:61] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:24:04.271375   29384 system_pods.go:61] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:24:04.271383   29384 system_pods.go:74] duration metric: took 184.647807ms to wait for pod list to return data ...
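
The 17-pod inventory above is a single list of the kube-system namespace, checked for every expected pod reporting phase Running. A compact client-go version of that listing (default kubeconfig path assumed):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
	}
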
	I0827 22:24:04.271393   29384 default_sa.go:34] waiting for default service account to be created ...
	I0827 22:24:04.461890   29384 request.go:632] Waited for 190.422827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:24:04.461984   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:24:04.461999   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.462010   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.462016   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.465756   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.465952   29384 default_sa.go:45] found service account: "default"
	I0827 22:24:04.465967   29384 default_sa.go:55] duration metric: took 194.566523ms for default service account to be created ...
	I0827 22:24:04.465974   29384 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 22:24:04.661389   29384 request.go:632] Waited for 195.3503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.661453   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.661458   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.661466   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.661472   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.666509   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:24:04.672801   29384 system_pods.go:86] 17 kube-system pods found
	I0827 22:24:04.672827   29384 system_pods.go:89] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:24:04.672832   29384 system_pods.go:89] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:24:04.672836   29384 system_pods.go:89] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:24:04.672840   29384 system_pods.go:89] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:24:04.672844   29384 system_pods.go:89] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:24:04.672847   29384 system_pods.go:89] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:24:04.672850   29384 system_pods.go:89] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:24:04.672855   29384 system_pods.go:89] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:24:04.672858   29384 system_pods.go:89] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:24:04.672862   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:24:04.672865   29384 system_pods.go:89] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:24:04.672869   29384 system_pods.go:89] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:24:04.672875   29384 system_pods.go:89] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:24:04.672878   29384 system_pods.go:89] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:24:04.672884   29384 system_pods.go:89] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:24:04.672888   29384 system_pods.go:89] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:24:04.672892   29384 system_pods.go:89] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:24:04.672898   29384 system_pods.go:126] duration metric: took 206.919567ms to wait for k8s-apps to be running ...
	I0827 22:24:04.672907   29384 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 22:24:04.672949   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:24:04.688955   29384 system_svc.go:56] duration metric: took 16.039406ms WaitForService to wait for kubelet
	I0827 22:24:04.688987   29384 kubeadm.go:582] duration metric: took 22.596587501s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:24:04.689004   29384 node_conditions.go:102] verifying NodePressure condition ...
	I0827 22:24:04.861434   29384 request.go:632] Waited for 172.327417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I0827 22:24:04.861483   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I0827 22:24:04.861488   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.861496   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.861500   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.864922   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.865734   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:24:04.865757   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:24:04.865769   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:24:04.865772   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:24:04.865776   29384 node_conditions.go:105] duration metric: took 176.767658ms to run NodePressure ...
	I0827 22:24:04.865787   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:24:04.865809   29384 start.go:255] writing updated cluster config ...
	I0827 22:24:04.867803   29384 out.go:201] 
	I0827 22:24:04.869186   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:04.869273   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:04.870872   29384 out.go:177] * Starting "ha-158602-m03" control-plane node in "ha-158602" cluster
	I0827 22:24:04.872079   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:24:04.872097   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:24:04.872187   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:24:04.872199   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:24:04.872282   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:04.872436   29384 start.go:360] acquireMachinesLock for ha-158602-m03: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:24:04.872503   29384 start.go:364] duration metric: took 46.449µs to acquireMachinesLock for "ha-158602-m03"
	I0827 22:24:04.872524   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:04.872619   29384 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0827 22:24:04.873955   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:24:04.874037   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:04.874072   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:04.889205   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0827 22:24:04.889637   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:04.890081   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:04.890104   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:04.890428   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:04.890668   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:04.890812   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:04.890978   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:24:04.891007   29384 client.go:168] LocalClient.Create starting
	I0827 22:24:04.891037   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:24:04.891069   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:24:04.891083   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:24:04.891130   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:24:04.891148   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:24:04.891161   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:24:04.891182   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:24:04.891190   29384 main.go:141] libmachine: (ha-158602-m03) Calling .PreCreateCheck
	I0827 22:24:04.891345   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:04.891744   29384 main.go:141] libmachine: Creating machine...
	I0827 22:24:04.891758   29384 main.go:141] libmachine: (ha-158602-m03) Calling .Create
	I0827 22:24:04.891912   29384 main.go:141] libmachine: (ha-158602-m03) Creating KVM machine...
	I0827 22:24:04.893080   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found existing default KVM network
	I0827 22:24:04.893222   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found existing private KVM network mk-ha-158602
	I0827 22:24:04.893349   29384 main.go:141] libmachine: (ha-158602-m03) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 ...
	I0827 22:24:04.893377   29384 main.go:141] libmachine: (ha-158602-m03) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:24:04.893425   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:04.893338   30149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:24:04.893519   29384 main.go:141] libmachine: (ha-158602-m03) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:24:05.125864   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.125741   30149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa...
	I0827 22:24:05.363185   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.363057   30149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/ha-158602-m03.rawdisk...
	I0827 22:24:05.363221   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Writing magic tar header
	I0827 22:24:05.363238   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Writing SSH key tar header
	I0827 22:24:05.363252   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.363166   30149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 ...
	I0827 22:24:05.363334   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03
	I0827 22:24:05.363373   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:24:05.363386   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 (perms=drwx------)
	I0827 22:24:05.363405   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:24:05.363418   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:24:05.363438   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:24:05.363459   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:24:05.363474   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:24:05.363500   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:24:05.363521   29384 main.go:141] libmachine: (ha-158602-m03) Creating domain...
	I0827 22:24:05.363537   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:24:05.363557   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:24:05.363571   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:24:05.363583   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home
	I0827 22:24:05.363593   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Skipping /home - not owner
	I0827 22:24:05.364565   29384 main.go:141] libmachine: (ha-158602-m03) define libvirt domain using xml: 
	I0827 22:24:05.364587   29384 main.go:141] libmachine: (ha-158602-m03) <domain type='kvm'>
	I0827 22:24:05.364598   29384 main.go:141] libmachine: (ha-158602-m03)   <name>ha-158602-m03</name>
	I0827 22:24:05.364609   29384 main.go:141] libmachine: (ha-158602-m03)   <memory unit='MiB'>2200</memory>
	I0827 22:24:05.364621   29384 main.go:141] libmachine: (ha-158602-m03)   <vcpu>2</vcpu>
	I0827 22:24:05.364632   29384 main.go:141] libmachine: (ha-158602-m03)   <features>
	I0827 22:24:05.364642   29384 main.go:141] libmachine: (ha-158602-m03)     <acpi/>
	I0827 22:24:05.364655   29384 main.go:141] libmachine: (ha-158602-m03)     <apic/>
	I0827 22:24:05.364666   29384 main.go:141] libmachine: (ha-158602-m03)     <pae/>
	I0827 22:24:05.364674   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.364685   29384 main.go:141] libmachine: (ha-158602-m03)   </features>
	I0827 22:24:05.364691   29384 main.go:141] libmachine: (ha-158602-m03)   <cpu mode='host-passthrough'>
	I0827 22:24:05.364699   29384 main.go:141] libmachine: (ha-158602-m03)   
	I0827 22:24:05.364704   29384 main.go:141] libmachine: (ha-158602-m03)   </cpu>
	I0827 22:24:05.364712   29384 main.go:141] libmachine: (ha-158602-m03)   <os>
	I0827 22:24:05.364720   29384 main.go:141] libmachine: (ha-158602-m03)     <type>hvm</type>
	I0827 22:24:05.364732   29384 main.go:141] libmachine: (ha-158602-m03)     <boot dev='cdrom'/>
	I0827 22:24:05.364745   29384 main.go:141] libmachine: (ha-158602-m03)     <boot dev='hd'/>
	I0827 22:24:05.364757   29384 main.go:141] libmachine: (ha-158602-m03)     <bootmenu enable='no'/>
	I0827 22:24:05.364767   29384 main.go:141] libmachine: (ha-158602-m03)   </os>
	I0827 22:24:05.364775   29384 main.go:141] libmachine: (ha-158602-m03)   <devices>
	I0827 22:24:05.364786   29384 main.go:141] libmachine: (ha-158602-m03)     <disk type='file' device='cdrom'>
	I0827 22:24:05.364801   29384 main.go:141] libmachine: (ha-158602-m03)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/boot2docker.iso'/>
	I0827 22:24:05.364815   29384 main.go:141] libmachine: (ha-158602-m03)       <target dev='hdc' bus='scsi'/>
	I0827 22:24:05.364832   29384 main.go:141] libmachine: (ha-158602-m03)       <readonly/>
	I0827 22:24:05.364846   29384 main.go:141] libmachine: (ha-158602-m03)     </disk>
	I0827 22:24:05.364859   29384 main.go:141] libmachine: (ha-158602-m03)     <disk type='file' device='disk'>
	I0827 22:24:05.364871   29384 main.go:141] libmachine: (ha-158602-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:24:05.364886   29384 main.go:141] libmachine: (ha-158602-m03)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/ha-158602-m03.rawdisk'/>
	I0827 22:24:05.364897   29384 main.go:141] libmachine: (ha-158602-m03)       <target dev='hda' bus='virtio'/>
	I0827 22:24:05.364907   29384 main.go:141] libmachine: (ha-158602-m03)     </disk>
	I0827 22:24:05.364918   29384 main.go:141] libmachine: (ha-158602-m03)     <interface type='network'>
	I0827 22:24:05.364930   29384 main.go:141] libmachine: (ha-158602-m03)       <source network='mk-ha-158602'/>
	I0827 22:24:05.364941   29384 main.go:141] libmachine: (ha-158602-m03)       <model type='virtio'/>
	I0827 22:24:05.364949   29384 main.go:141] libmachine: (ha-158602-m03)     </interface>
	I0827 22:24:05.364959   29384 main.go:141] libmachine: (ha-158602-m03)     <interface type='network'>
	I0827 22:24:05.364973   29384 main.go:141] libmachine: (ha-158602-m03)       <source network='default'/>
	I0827 22:24:05.364985   29384 main.go:141] libmachine: (ha-158602-m03)       <model type='virtio'/>
	I0827 22:24:05.364995   29384 main.go:141] libmachine: (ha-158602-m03)     </interface>
	I0827 22:24:05.365003   29384 main.go:141] libmachine: (ha-158602-m03)     <serial type='pty'>
	I0827 22:24:05.365014   29384 main.go:141] libmachine: (ha-158602-m03)       <target port='0'/>
	I0827 22:24:05.365025   29384 main.go:141] libmachine: (ha-158602-m03)     </serial>
	I0827 22:24:05.365036   29384 main.go:141] libmachine: (ha-158602-m03)     <console type='pty'>
	I0827 22:24:05.365044   29384 main.go:141] libmachine: (ha-158602-m03)       <target type='serial' port='0'/>
	I0827 22:24:05.365056   29384 main.go:141] libmachine: (ha-158602-m03)     </console>
	I0827 22:24:05.365066   29384 main.go:141] libmachine: (ha-158602-m03)     <rng model='virtio'>
	I0827 22:24:05.365078   29384 main.go:141] libmachine: (ha-158602-m03)       <backend model='random'>/dev/random</backend>
	I0827 22:24:05.365089   29384 main.go:141] libmachine: (ha-158602-m03)     </rng>
	I0827 22:24:05.365099   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.365107   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.365113   29384 main.go:141] libmachine: (ha-158602-m03)   </devices>
	I0827 22:24:05.365125   29384 main.go:141] libmachine: (ha-158602-m03) </domain>
	I0827 22:24:05.365131   29384 main.go:141] libmachine: (ha-158602-m03) 
	I0827 22:24:05.372087   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:3e:7a:6b in network default
	I0827 22:24:05.372733   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring networks are active...
	I0827 22:24:05.372756   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:05.373716   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring network default is active
	I0827 22:24:05.374012   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring network mk-ha-158602 is active
	I0827 22:24:05.374445   29384 main.go:141] libmachine: (ha-158602-m03) Getting domain xml...
	I0827 22:24:05.375267   29384 main.go:141] libmachine: (ha-158602-m03) Creating domain...
	I0827 22:24:06.609947   29384 main.go:141] libmachine: (ha-158602-m03) Waiting to get IP...
	I0827 22:24:06.610674   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:06.611152   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:06.611177   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:06.611113   30149 retry.go:31] will retry after 220.771743ms: waiting for machine to come up
	I0827 22:24:06.833726   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:06.834179   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:06.834206   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:06.834158   30149 retry.go:31] will retry after 323.861578ms: waiting for machine to come up
	I0827 22:24:07.159673   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.160206   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.160239   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.160149   30149 retry.go:31] will retry after 297.83033ms: waiting for machine to come up
	I0827 22:24:07.459728   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.460226   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.460249   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.460180   30149 retry.go:31] will retry after 438.110334ms: waiting for machine to come up
	I0827 22:24:07.899697   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.900092   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.900113   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.900051   30149 retry.go:31] will retry after 575.629093ms: waiting for machine to come up
	I0827 22:24:08.476870   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:08.477464   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:08.477496   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:08.477409   30149 retry.go:31] will retry after 621.866439ms: waiting for machine to come up
	I0827 22:24:09.101439   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:09.101895   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:09.101924   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:09.101836   30149 retry.go:31] will retry after 983.692714ms: waiting for machine to come up
	I0827 22:24:10.087444   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:10.087967   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:10.087999   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:10.087891   30149 retry.go:31] will retry after 983.631541ms: waiting for machine to come up
	I0827 22:24:11.072907   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:11.073346   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:11.073377   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:11.073309   30149 retry.go:31] will retry after 1.80000512s: waiting for machine to come up
	I0827 22:24:12.875166   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:12.875490   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:12.875522   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:12.875469   30149 retry.go:31] will retry after 2.085011068s: waiting for machine to come up
	I0827 22:24:14.962334   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:14.962817   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:14.962845   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:14.962781   30149 retry.go:31] will retry after 2.169328394s: waiting for machine to come up
	I0827 22:24:17.134398   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:17.134825   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:17.134851   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:17.134779   30149 retry.go:31] will retry after 2.479018152s: waiting for machine to come up
	I0827 22:24:19.616301   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:19.616679   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:19.616703   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:19.616636   30149 retry.go:31] will retry after 4.325988713s: waiting for machine to come up
	I0827 22:24:23.947128   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:23.947587   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:23.947608   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:23.947559   30149 retry.go:31] will retry after 4.889309517s: waiting for machine to come up
	I0827 22:24:28.841489   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.842062   29384 main.go:141] libmachine: (ha-158602-m03) Found IP for machine: 192.168.39.91
	I0827 22:24:28.842087   29384 main.go:141] libmachine: (ha-158602-m03) Reserving static IP address...
	I0827 22:24:28.842103   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has current primary IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.842468   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find host DHCP lease matching {name: "ha-158602-m03", mac: "52:54:00:5e:4d:2e", ip: "192.168.39.91"} in network mk-ha-158602
	I0827 22:24:28.916856   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Getting to WaitForSSH function...
	I0827 22:24:28.916884   29384 main.go:141] libmachine: (ha-158602-m03) Reserved static IP address: 192.168.39.91
	I0827 22:24:28.916897   29384 main.go:141] libmachine: (ha-158602-m03) Waiting for SSH to be available...
	I0827 22:24:28.919631   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.919985   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:28.920015   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.920155   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using SSH client type: external
	I0827 22:24:28.920185   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa (-rw-------)
	I0827 22:24:28.920225   29384 main.go:141] libmachine: (ha-158602-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:24:28.920247   29384 main.go:141] libmachine: (ha-158602-m03) DBG | About to run SSH command:
	I0827 22:24:28.920266   29384 main.go:141] libmachine: (ha-158602-m03) DBG | exit 0
	I0827 22:24:29.044609   29384 main.go:141] libmachine: (ha-158602-m03) DBG | SSH cmd err, output: <nil>: 
	I0827 22:24:29.044991   29384 main.go:141] libmachine: (ha-158602-m03) KVM machine creation complete!
	I0827 22:24:29.045244   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:29.045854   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:29.046062   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:29.046231   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:24:29.046248   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:24:29.047459   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:24:29.047474   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:24:29.047481   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:24:29.047489   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.049787   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.050279   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.050306   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.050594   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.050775   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.050916   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.051058   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.051188   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.051385   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.051399   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:24:29.151732   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:24:29.151754   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:24:29.151764   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.154524   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.154867   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.154902   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.155058   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.155232   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.155354   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.155468   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.155694   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.155885   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.155900   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:24:29.257207   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:24:29.257298   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:24:29.257313   29384 main.go:141] libmachine: Provisioning with buildroot...
	I0827 22:24:29.257326   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.257573   29384 buildroot.go:166] provisioning hostname "ha-158602-m03"
	I0827 22:24:29.257599   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.257800   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.260826   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.261209   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.261236   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.261525   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.261742   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.261929   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.262053   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.262334   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.262556   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.262573   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602-m03 && echo "ha-158602-m03" | sudo tee /etc/hostname
	I0827 22:24:29.380133   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602-m03
	
	I0827 22:24:29.380160   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.383586   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.384086   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.384115   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.384352   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.384582   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.384775   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.385106   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.385331   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.385537   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.385553   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:24:29.492854   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:24:29.492886   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:24:29.492907   29384 buildroot.go:174] setting up certificates
	I0827 22:24:29.492919   29384 provision.go:84] configureAuth start
	I0827 22:24:29.492930   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.493253   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:29.496205   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.496676   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.496706   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.496850   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.499310   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.499811   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.499839   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.500014   29384 provision.go:143] copyHostCerts
	I0827 22:24:29.500042   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:24:29.500069   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:24:29.500079   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:24:29.500145   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:24:29.500221   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:24:29.500247   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:24:29.500257   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:24:29.500296   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:24:29.500368   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:24:29.500388   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:24:29.500394   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:24:29.500419   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:24:29.500488   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602-m03 san=[127.0.0.1 192.168.39.91 ha-158602-m03 localhost minikube]
	I0827 22:24:29.630247   29384 provision.go:177] copyRemoteCerts
	I0827 22:24:29.630300   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:24:29.630323   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.633003   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.633438   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.633464   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.633664   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.633858   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.634021   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.634153   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:29.714965   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:24:29.715031   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:24:29.738180   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:24:29.738256   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 22:24:29.761405   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:24:29.761482   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:24:29.785628   29384 provision.go:87] duration metric: took 292.694937ms to configureAuth
	I0827 22:24:29.785657   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:24:29.785858   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:29.785943   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.788766   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.789195   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.789217   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.789406   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.789632   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.789778   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.789895   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.790113   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.790272   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.790287   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:24:30.022419   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:24:30.022446   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:24:30.022456   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetURL
	I0827 22:24:30.023886   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using libvirt version 6000000
	I0827 22:24:30.025890   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.026243   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.026274   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.026403   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:24:30.026416   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:24:30.026424   29384 client.go:171] duration metric: took 25.135406733s to LocalClient.Create
	I0827 22:24:30.026449   29384 start.go:167] duration metric: took 25.135470642s to libmachine.API.Create "ha-158602"
	I0827 22:24:30.026463   29384 start.go:293] postStartSetup for "ha-158602-m03" (driver="kvm2")
	I0827 22:24:30.026479   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:24:30.026500   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.026761   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:24:30.026784   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.028978   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.029305   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.029328   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.029461   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.029658   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.029828   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.029990   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.110331   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:24:30.114683   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:24:30.114715   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:24:30.114804   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:24:30.114918   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:24:30.114931   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:24:30.115046   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:24:30.124148   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:24:30.146865   29384 start.go:296] duration metric: took 120.387267ms for postStartSetup
	I0827 22:24:30.146917   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:30.147629   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:30.150260   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.150677   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.150705   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.150927   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:30.151114   29384 start.go:128] duration metric: took 25.278483191s to createHost
	I0827 22:24:30.151134   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.153331   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.153665   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.153693   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.153848   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.154038   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.154211   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.154330   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.154480   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:30.154629   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:30.154639   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:24:30.253676   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797470.232935843
	
	I0827 22:24:30.253701   29384 fix.go:216] guest clock: 1724797470.232935843
	I0827 22:24:30.253712   29384 fix.go:229] Guest: 2024-08-27 22:24:30.232935843 +0000 UTC Remote: 2024-08-27 22:24:30.151124995 +0000 UTC m=+144.459299351 (delta=81.810848ms)
	I0827 22:24:30.253736   29384 fix.go:200] guest clock delta is within tolerance: 81.810848ms
	I0827 22:24:30.253744   29384 start.go:83] releasing machines lock for "ha-158602-m03", held for 25.381228219s
	I0827 22:24:30.253774   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.254044   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:30.257885   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.258339   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.258411   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.260666   29384 out.go:177] * Found network options:
	I0827 22:24:30.261992   29384 out.go:177]   - NO_PROXY=192.168.39.77,192.168.39.142
	W0827 22:24:30.263273   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0827 22:24:30.263300   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:24:30.263318   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.263878   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.264062   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.264147   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:24:30.264192   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	W0827 22:24:30.264267   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0827 22:24:30.264290   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:24:30.264347   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:24:30.264363   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.267160   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267307   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267579   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.267605   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267782   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.267948   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.267971   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267972   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.268133   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.268191   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.268298   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.268385   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.268448   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.268604   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.493925   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:24:30.500126   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:24:30.500179   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:24:30.515978   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:24:30.515999   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:24:30.516069   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:24:30.532827   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:24:30.551267   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:24:30.551335   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:24:30.564779   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:24:30.578641   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:24:30.699297   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:24:30.868373   29384 docker.go:233] disabling docker service ...
	I0827 22:24:30.868443   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:24:30.882109   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:24:30.894160   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:24:31.007677   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:24:31.132026   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:24:31.145973   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:24:31.164500   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:24:31.164567   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.174821   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:24:31.174880   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.184755   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.195049   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.205076   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:24:31.216111   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.225938   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.242393   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
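The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. Here is a minimal Go sketch of the two key edits (pause image and cgroup manager) done with regexp instead of sed; the path and values are taken from the log, the rest is an illustration, not minikube's actual implementation.

// criocfg.go - rewrite pause_image and cgroup_manager in the CRI-O drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // needs root to write
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	cfg := string(data)

	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		log.Fatal(err)
	}
}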
	I0827 22:24:31.252457   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:24:31.261503   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:24:31.261564   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:24:31.274618   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:24:31.284766   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:31.408223   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:24:31.498819   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:24:31.498885   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:24:31.503305   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:24:31.503341   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:24:31.506812   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:24:31.546189   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
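start.go waits up to 60s for the CRI socket and for crictl to answer before the version above is printed. A minimal sketch of that wait as a poll loop around `crictl version`; the 2s retry interval is an assumption, not minikube's actual value, and crictl is assumed to be on PATH with access to /var/run/crio/crio.sock.

// waitcrictl.go - poll `crictl version` until the runtime answers or 60s elapse.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	for {
		out, err := exec.CommandContext(ctx, "crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // RuntimeName / RuntimeVersion / RuntimeApiVersion, as above
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for crictl version:", err)
			return
		case <-time.After(2 * time.Second): // illustrative retry interval
		}
	}
}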
	I0827 22:24:31.546268   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:24:31.576994   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:24:31.604550   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:24:31.605653   29384 out.go:177]   - env NO_PROXY=192.168.39.77
	I0827 22:24:31.606714   29384 out.go:177]   - env NO_PROXY=192.168.39.77,192.168.39.142
	I0827 22:24:31.608035   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:31.611599   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:31.612059   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:31.612084   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:31.612285   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:24:31.616326   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
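The bash one-liner above rewrites /etc/hosts to pin host.minikube.internal at 192.168.39.1. A minimal Go sketch of the same filter-and-append rewrite; hostname and IP come from the log, and root is required to write /etc/hosts.

// hostsentry.go - drop any existing host.minikube.internal line, then append a fresh one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		path = "/etc/hosts"
		host = "host.minikube.internal"
		ip   = "192.168.39.1"
	)
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // same effect as `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" { // trim trailing blank lines before appending
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+host, "")
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}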
	I0827 22:24:31.628845   29384 mustload.go:65] Loading cluster: ha-158602
	I0827 22:24:31.629094   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:31.629335   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:31.629369   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:31.643988   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0827 22:24:31.644501   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:31.645013   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:31.645027   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:31.645366   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:31.645542   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:24:31.646891   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:24:31.647169   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:31.647210   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:31.663133   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0827 22:24:31.663491   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:31.663934   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:31.663954   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:31.664237   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:31.664416   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:24:31.664592   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.91
	I0827 22:24:31.664605   29384 certs.go:194] generating shared ca certs ...
	I0827 22:24:31.664626   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.664752   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:24:31.664812   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:24:31.664826   29384 certs.go:256] generating profile certs ...
	I0827 22:24:31.664919   29384 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:24:31.664951   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387
	I0827 22:24:31.664973   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.91 192.168.39.254]
	I0827 22:24:31.826242   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 ...
	I0827 22:24:31.826270   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387: {Name:mkc02f69cd5a3b130232a3c673e047eaa95570fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.826430   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387 ...
	I0827 22:24:31.826442   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387: {Name:mkd84ac9539a4b0a8e9556967b7d93a1480590fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.826507   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:24:31.826646   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:24:31.826765   29384 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
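The apiserver serving certificate above is generated with the listed IP SANs and signed by the minikube CA. Below is a minimal crypto/x509 sketch of that step; it assumes the CA key is a PKCS#1 RSA key, and the key size, subject and validity period are illustrative rather than minikube's exact settings.

// apiservercert.go - sign an apiserver cert with the IP SANs reported in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the first PEM block, failing loudly otherwise.
func mustPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt")) // paths shortened for the sketch
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key")) // assumption: PKCS#1 RSA key
	if err != nil {
		log.Fatal(err)
	}

	// IP SANs exactly as listed above: service IPs, localhost, the node IPs and the HA VIP.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.77", "192.168.39.142", "192.168.39.91", "192.168.39.254"} {
		ips = append(ips, net.ParseIP(s))
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048) // illustrative key size
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("apiserver.crt", certPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("apiserver.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}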
	I0827 22:24:31.826781   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:24:31.826794   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:24:31.826805   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:24:31.826819   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:24:31.826831   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:24:31.826843   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:24:31.826855   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:24:31.826866   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:24:31.826909   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:24:31.826934   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:24:31.826943   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:24:31.826966   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:24:31.826987   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:24:31.827007   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:24:31.827043   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:24:31.827069   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:24:31.827083   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:31.827095   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:24:31.827126   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:24:31.830162   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:31.830639   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:24:31.830667   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:31.830891   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:24:31.831106   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:24:31.831277   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:24:31.831466   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:24:31.908859   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0827 22:24:31.913741   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0827 22:24:31.928310   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0827 22:24:31.932553   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0827 22:24:31.943592   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0827 22:24:31.947657   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0827 22:24:31.957815   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0827 22:24:31.961448   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0827 22:24:31.971381   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0827 22:24:31.975451   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0827 22:24:31.984717   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0827 22:24:31.988487   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0827 22:24:32.000232   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:24:32.023455   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:24:32.046362   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:24:32.068702   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:24:32.090208   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0827 22:24:32.113468   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:24:32.137053   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:24:32.160293   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:24:32.183753   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:24:32.205646   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:24:32.227241   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:24:32.249455   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0827 22:24:32.264950   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0827 22:24:32.280275   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0827 22:24:32.295688   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0827 22:24:32.310808   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0827 22:24:32.326689   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0827 22:24:32.342889   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0827 22:24:32.359085   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:24:32.364803   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:24:32.375530   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.380385   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.380454   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.386694   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:24:32.397159   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:24:32.407586   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.411606   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.411664   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.416828   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:24:32.427230   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:24:32.437521   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.441687   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.441739   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.446918   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
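Each `openssl x509 -hash` / `ln -fs` pair above publishes a CA into the system trust store under its subject-hash name (e.g. b5213941.0 for minikubeCA). A minimal Go sketch of one such pair; it shells out to openssl for the hash, uses the minikubeCA path from the log, and needs root to write /etc/ssl/certs.

// certhashlink.go - compute a cert's subject hash and create the /etc/ssl/certs/<hash>.0 link.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // one of the certs above

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen in the log

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent to ln -f: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}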
	I0827 22:24:32.457231   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:24:32.460934   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:24:32.460982   29384 kubeadm.go:934] updating node {m03 192.168.39.91 8443 v1.31.0 crio true true} ...
	I0827 22:24:32.461053   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
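The kubelet drop-in above is rendered per node (only --hostname-override and --node-ip vary between m01, m02 and m03). A minimal text/template sketch that reproduces the ExecStart line from the values reported for ha-158602-m03; the struct and template are illustrative, not minikube's actual types.

// kubeletunit.go - render the kubelet systemd drop-in for one node.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion, NodeName, NodeIP string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values for the third control-plane node, as reported in the log above.
	n := node{KubernetesVersion: "v1.31.0", NodeName: "ha-158602-m03", NodeIP: "192.168.39.91"}
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}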
	I0827 22:24:32.461077   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:24:32.461109   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:24:32.479250   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:24:32.479323   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
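A plausible reading of the modprobe line and the "auto-enabling control-plane load-balancing" message above is that the lb_enable/lb_port entries in this manifest are only added when the IPVS kernel modules load. The sketch below illustrates that gate; the decision logic is an assumption based on those two log lines, not a copy of minikube's kube-vip.go.

// kubevip_lb.go - enable kube-vip's control-plane load balancer only if IPVS modules load.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mods := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
	args := append([]string{"--all"}, mods...)
	if err := exec.Command("modprobe", args...).Run(); err != nil {
		fmt.Println("IPVS modules unavailable, generating kube-vip config without load-balancing:", err)
		return
	}
	// Same outcome as the log: add lb_enable="true" and lb_port="8443" to the manifest.
	fmt.Println(`auto-enabling control-plane load-balancing: lb_enable="true", lb_port="8443"`)
}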
	I0827 22:24:32.479384   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:24:32.488896   29384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0827 22:24:32.488963   29384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0827 22:24:32.498043   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0827 22:24:32.498065   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0827 22:24:32.498072   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:24:32.498079   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:24:32.498141   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:24:32.498143   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:24:32.498043   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0827 22:24:32.498272   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:24:32.505434   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0827 22:24:32.505469   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0827 22:24:32.505516   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0827 22:24:32.505545   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0827 22:24:32.534867   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:24:32.534982   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:24:32.615035   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0827 22:24:32.615070   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
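The kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io and checked against the published .sha256 files rather than served from the local cache. A minimal sketch of one such download-and-verify, using the kubectl URL from the log; the output filename is illustrative.

// fetchbinary.go - download a release binary and verify it against its .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for " + base)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote kubectl,", len(bin), "bytes")
}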
	I0827 22:24:33.364326   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0827 22:24:33.373755   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0827 22:24:33.392011   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:24:33.407687   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0827 22:24:33.423475   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:24:33.427162   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:24:33.438995   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:33.574498   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:24:33.593760   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:24:33.594113   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:33.594163   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:33.610086   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0827 22:24:33.610556   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:33.611079   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:33.611104   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:33.611464   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:33.611705   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:24:33.611879   29384 start.go:317] joinCluster: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:24:33.612032   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0827 22:24:33.612052   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:24:33.614979   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:33.615446   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:24:33.615480   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:33.615607   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:24:33.615793   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:24:33.615971   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:24:33.616122   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:24:33.772325   29384 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:33.772384   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vkba7b.6o5mdwymayp2q8ew --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m03 --control-plane --apiserver-advertise-address=192.168.39.91 --apiserver-bind-port=8443"
	I0827 22:24:57.256656   29384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vkba7b.6o5mdwymayp2q8ew --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m03 --control-plane --apiserver-advertise-address=192.168.39.91 --apiserver-bind-port=8443": (23.484245395s)
	I0827 22:24:57.256693   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0827 22:24:57.833197   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602-m03 minikube.k8s.io/updated_at=2024_08_27T22_24_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=false
	I0827 22:24:57.980079   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-158602-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0827 22:24:58.093149   29384 start.go:319] duration metric: took 24.481266634s to joinCluster
	I0827 22:24:58.093232   29384 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:58.093529   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:58.094736   29384 out.go:177] * Verifying Kubernetes components...
	I0827 22:24:58.095953   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:58.323812   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:24:58.340373   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:24:58.340720   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0827 22:24:58.340780   29384 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.77:8443
	I0827 22:24:58.341049   29384 node_ready.go:35] waiting up to 6m0s for node "ha-158602-m03" to be "Ready" ...
	I0827 22:24:58.341135   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:58.341145   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:58.341156   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:58.341164   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:58.344828   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:58.841182   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:58.841220   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:58.841230   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:58.841238   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:58.844898   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:59.341770   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:59.341794   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:59.341804   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:59.341809   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:59.365046   29384 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0827 22:24:59.841468   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:59.841492   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:59.841502   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:59.841508   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:59.844958   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:00.341213   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:00.341234   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:00.341242   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:00.341246   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:00.345165   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:00.345780   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
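node_ready.go polls GET /api/v1/nodes/ha-158602-m03 roughly every 500ms for up to 6 minutes until the node's Ready condition turns True. A minimal client-go sketch of the same wait, replacing the raw round-trippers above with a typed clientset; the kubeconfig path and node name are the ones reported in the log.

// waitready.go - wait for a node's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19522-7571/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "ha-158602-m03"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}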
	I0827 22:25:00.841361   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:00.841385   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:00.841397   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:00.841402   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:00.845196   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:01.341738   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:01.341765   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:01.341776   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:01.341790   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:01.347416   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:25:01.842065   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:01.842086   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:01.842094   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:01.842099   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:01.845230   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:02.342005   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:02.342026   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:02.342034   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:02.342040   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:02.345683   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:02.346573   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:02.841885   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:02.841909   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:02.841919   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:02.841923   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:02.845649   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:03.341761   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:03.341782   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:03.341792   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:03.341799   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:03.345961   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:03.841372   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:03.841396   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:03.841404   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:03.841410   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:03.844810   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:04.341710   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:04.341731   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:04.341739   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:04.341743   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:04.350472   29384 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0827 22:25:04.351534   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:04.842285   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:04.842337   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:04.842345   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:04.842350   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:04.845439   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:05.341967   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:05.341995   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:05.342008   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:05.342012   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:05.346630   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:05.841422   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:05.841446   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:05.841457   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:05.841463   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:05.844685   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.341960   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:06.341980   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:06.341988   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:06.341991   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:06.345392   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.842039   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:06.842061   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:06.842069   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:06.842072   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:06.845497   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.846198   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:07.341301   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:07.341339   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:07.341351   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:07.341357   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:07.344768   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:07.841623   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:07.841645   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:07.841653   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:07.841658   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:07.845281   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.342263   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:08.342286   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:08.342296   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:08.342301   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:08.346298   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.841710   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:08.841731   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:08.841740   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:08.841745   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:08.845551   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.846348   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:09.341358   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:09.341383   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:09.341391   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:09.341394   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:09.349248   29384 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0827 22:25:09.841489   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:09.841512   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:09.841520   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:09.841523   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:09.844713   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:10.341500   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:10.341530   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:10.341542   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:10.341550   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:10.344750   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:10.841338   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:10.841357   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:10.841365   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:10.841375   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:10.844194   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:11.341627   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:11.341665   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:11.341673   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:11.341678   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:11.344879   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:11.345463   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:11.841742   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:11.841764   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:11.841772   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:11.841776   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:11.845181   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:12.342112   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:12.342134   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:12.342142   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:12.342147   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:12.345618   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:12.841355   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:12.841389   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:12.841398   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:12.841402   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:12.844955   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:13.341690   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:13.341710   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:13.341720   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:13.341728   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:13.345304   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:13.346042   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:13.841770   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:13.841797   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:13.841807   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:13.841813   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:13.846456   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:14.342238   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:14.342266   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.342279   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.342285   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.345294   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.841748   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:14.841775   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.841785   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.841794   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.845143   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:14.845644   29384 node_ready.go:49] node "ha-158602-m03" has status "Ready":"True"
	I0827 22:25:14.845661   29384 node_ready.go:38] duration metric: took 16.50459208s for node "ha-158602-m03" to be "Ready" ...
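The wait loop above polls GET /api/v1/nodes/ha-158602-m03 roughly every 500ms until the node reports a Ready condition of True. A minimal client-go sketch of that kind of check (illustrative only, not minikube's node_ready helper; the kubeconfig location and the 6-minute deadline are assumptions):

// Sketch: poll a node's Ready condition the way the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-158602-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence seen above
	}
	fmt.Println("timed out waiting for node to become Ready")
}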
	I0827 22:25:14.845670   29384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:25:14.845735   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:14.845746   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.845753   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.845758   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.852444   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:25:14.859174   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.859243   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxzgs
	I0827 22:25:14.859252   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.859259   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.859264   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.862125   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.862874   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.862889   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.862897   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.862902   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.865914   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:14.868714   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.868751   29384 pod_ready.go:82] duration metric: took 9.552798ms for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.868764   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.868828   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x6dcd
	I0827 22:25:14.868839   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.868848   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.868852   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.871739   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.872414   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.872429   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.872436   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.872440   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.875080   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.875596   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.875612   29384 pod_ready.go:82] duration metric: took 6.840862ms for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.875621   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.875666   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602
	I0827 22:25:14.875674   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.875680   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.875684   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.878164   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.878647   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.878659   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.878666   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.878670   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.881013   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.881460   29384 pod_ready.go:93] pod "etcd-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.881474   29384 pod_ready.go:82] duration metric: took 5.84732ms for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.881482   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.881526   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m02
	I0827 22:25:14.881533   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.881540   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.881546   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.883856   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.884470   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:14.884486   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.884497   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.884502   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.886933   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.887476   29384 pod_ready.go:93] pod "etcd-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.887498   29384 pod_ready.go:82] duration metric: took 6.001947ms for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.887512   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.041884   29384 request.go:632] Waited for 154.30673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m03
	I0827 22:25:15.041949   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m03
	I0827 22:25:15.041954   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.041962   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.041967   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.045115   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.241955   29384 request.go:632] Waited for 196.283508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:15.242027   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:15.242033   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.242043   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.242051   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.245012   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:15.245485   29384 pod_ready.go:93] pod "etcd-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:15.245504   29384 pod_ready.go:82] duration metric: took 357.982788ms for pod "etcd-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.245520   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.442694   29384 request.go:632] Waited for 197.104258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:25:15.442771   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:25:15.442777   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.442785   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.442788   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.446249   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.642227   29384 request.go:632] Waited for 195.380269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:15.642281   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:15.642286   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.642293   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.642298   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.646122   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.646596   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:15.646615   29384 pod_ready.go:82] duration metric: took 401.087797ms for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.646626   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.842668   29384 request.go:632] Waited for 195.964234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:25:15.842741   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:25:15.842748   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.842759   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.842770   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.846125   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.042277   29384 request.go:632] Waited for 195.322782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:16.042344   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:16.042350   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.042356   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.042359   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.045670   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.046227   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.046243   29384 pod_ready.go:82] duration metric: took 399.610743ms for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.046253   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.242328   29384 request.go:632] Waited for 196.015123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m03
	I0827 22:25:16.242393   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m03
	I0827 22:25:16.242400   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.242411   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.242418   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.245830   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.442826   29384 request.go:632] Waited for 196.393424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:16.442877   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:16.442882   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.442895   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.442902   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.446118   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.446821   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.446848   29384 pod_ready.go:82] duration metric: took 400.588436ms for pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.446858   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.642040   29384 request.go:632] Waited for 195.123868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:25:16.642123   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:25:16.642131   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.642152   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.642159   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.645748   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.842426   29384 request.go:632] Waited for 195.788855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:16.842489   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:16.842496   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.842509   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.842516   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.845834   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.846437   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.846463   29384 pod_ready.go:82] duration metric: took 399.599593ms for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.846473   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.042603   29384 request.go:632] Waited for 196.065274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:25:17.042676   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:25:17.042681   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.042689   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.042695   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.046600   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.242365   29384 request.go:632] Waited for 194.921203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:17.242426   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:17.242433   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.242443   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.242457   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.247186   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:17.247645   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:17.247666   29384 pod_ready.go:82] duration metric: took 401.176595ms for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.247677   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.442798   29384 request.go:632] Waited for 195.05519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m03
	I0827 22:25:17.442861   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m03
	I0827 22:25:17.442878   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.442886   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.442891   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.446045   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.641881   29384 request.go:632] Waited for 195.274175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:17.641947   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:17.641955   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.641962   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.641970   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.645713   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.646253   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:17.646275   29384 pod_ready.go:82] duration metric: took 398.590477ms for pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.646288   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.842295   29384 request.go:632] Waited for 195.928987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:25:17.842380   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:25:17.842387   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.842399   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.842409   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.846008   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.042394   29384 request.go:632] Waited for 195.35937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:18.042462   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:18.042472   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.042484   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.042493   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.046036   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.046624   29384 pod_ready.go:93] pod "kube-proxy-5pmrv" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.046644   29384 pod_ready.go:82] duration metric: took 400.349246ms for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.046657   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nhjgk" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.242739   29384 request.go:632] Waited for 195.992411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nhjgk
	I0827 22:25:18.242809   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nhjgk
	I0827 22:25:18.242820   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.242833   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.242845   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.245988   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.441820   29384 request.go:632] Waited for 195.243524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:18.441908   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:18.441919   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.441932   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.441938   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.445176   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.445864   29384 pod_ready.go:93] pod "kube-proxy-nhjgk" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.445884   29384 pod_ready.go:82] duration metric: took 399.220525ms for pod "kube-proxy-nhjgk" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.445894   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.642606   29384 request.go:632] Waited for 196.632365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:25:18.642678   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:25:18.642690   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.642699   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.642706   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.645890   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.842187   29384 request.go:632] Waited for 195.34412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:18.842261   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:18.842270   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.842281   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.842286   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.845501   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.846119   29384 pod_ready.go:93] pod "kube-proxy-slgmm" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.846143   29384 pod_ready.go:82] duration metric: took 400.242013ms for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.846157   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.042142   29384 request.go:632] Waited for 195.908855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:25:19.042232   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:25:19.042251   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.042261   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.042282   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.045495   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.242439   29384 request.go:632] Waited for 196.370297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:19.242501   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:19.242506   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.242513   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.242516   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.245992   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.246791   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:19.246811   29384 pod_ready.go:82] duration metric: took 400.645957ms for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.246826   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.441865   29384 request.go:632] Waited for 194.97253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:25:19.441951   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:25:19.441970   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.441994   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.442003   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.444825   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:19.642767   29384 request.go:632] Waited for 197.281156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:19.642844   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:19.642850   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.642857   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.642862   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.646271   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.646844   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:19.646867   29384 pod_ready.go:82] duration metric: took 400.028336ms for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.646881   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.842077   29384 request.go:632] Waited for 195.093907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m03
	I0827 22:25:19.842156   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m03
	I0827 22:25:19.842165   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.842176   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.842186   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.845567   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.042091   29384 request.go:632] Waited for 195.571883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:20.042174   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:20.042180   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.042187   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.042192   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.045760   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.046425   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:20.046446   29384 pod_ready.go:82] duration metric: took 399.5556ms for pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:20.046461   29384 pod_ready.go:39] duration metric: took 5.200779619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
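The pod_ready phase above walks a fixed set of label selectors and checks each matching pod's Ready condition. A sketch of an equivalent check with client-go, assuming the default kubeconfig location; the selectors are copied from the log line above, everything else is illustrative:

// Sketch: report Ready status for the system-critical pod labels listed above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same labels the wait above iterates over.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			fmt.Printf("%-45s %-35s Ready=%v\n", p.Name, sel, ready)
		}
	}
}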
	I0827 22:25:20.046481   29384 api_server.go:52] waiting for apiserver process to appear ...
	I0827 22:25:20.046538   29384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:25:20.062650   29384 api_server.go:72] duration metric: took 21.969376334s to wait for apiserver process to appear ...
	I0827 22:25:20.062684   29384 api_server.go:88] waiting for apiserver healthz status ...
	I0827 22:25:20.062704   29384 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0827 22:25:20.068550   29384 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0827 22:25:20.068617   29384 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I0827 22:25:20.068625   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.068634   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.068638   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.069381   29384 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0827 22:25:20.069442   29384 api_server.go:141] control plane version: v1.31.0
	I0827 22:25:20.069452   29384 api_server.go:131] duration metric: took 6.762481ms to wait for apiserver health ...
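The healthz and /version probes above can be reproduced with the clientset's discovery REST client. A minimal sketch, assuming the default kubeconfig location:

// Sketch: probe /healthz and /version against the same apiserver endpoint.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // "ok" in the healthy case, as logged above
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion) // v1.31.0 in this run
}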
	I0827 22:25:20.069459   29384 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 22:25:20.242800   29384 request.go:632] Waited for 173.256132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.242854   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.242859   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.242866   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.242872   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.248432   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:25:20.255400   29384 system_pods.go:59] 24 kube-system pods found
	I0827 22:25:20.255438   29384 system_pods.go:61] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:25:20.255446   29384 system_pods.go:61] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:25:20.255453   29384 system_pods.go:61] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:25:20.255458   29384 system_pods.go:61] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:25:20.255465   29384 system_pods.go:61] "etcd-ha-158602-m03" [03c9965b-f795-4663-aeb5-3814314273ff] Running
	I0827 22:25:20.255470   29384 system_pods.go:61] "kindnet-9wgcl" [e7f9bf39-41d1-4ea2-9778-78aa3e0dd9c2] Running
	I0827 22:25:20.255475   29384 system_pods.go:61] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:25:20.255480   29384 system_pods.go:61] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:25:20.255493   29384 system_pods.go:61] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:25:20.255499   29384 system_pods.go:61] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:25:20.255504   29384 system_pods.go:61] "kube-apiserver-ha-158602-m03" [5b0573ad-9bbc-4ea4-9bbf-f7cd0084a028] Running
	I0827 22:25:20.255509   29384 system_pods.go:61] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:25:20.255514   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:25:20.255518   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m03" [b1bdc020-b729-4576-91f9-7d7055ebabd3] Running
	I0827 22:25:20.255523   29384 system_pods.go:61] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:25:20.255528   29384 system_pods.go:61] "kube-proxy-nhjgk" [f21dff1b-96f0-4ee5-9ad4-524cd4948de1] Running
	I0827 22:25:20.255533   29384 system_pods.go:61] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:25:20.255538   29384 system_pods.go:61] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:25:20.255543   29384 system_pods.go:61] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:25:20.255551   29384 system_pods.go:61] "kube-scheduler-ha-158602-m03" [41ec8f3e-cf73-4447-8e88-1dde3e8d4274] Running
	I0827 22:25:20.255556   29384 system_pods.go:61] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:25:20.255561   29384 system_pods.go:61] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:25:20.255568   29384 system_pods.go:61] "kube-vip-ha-158602-m03" [6fbee1d2-e66b-447a-9f9a-1e477fc0af06] Running
	I0827 22:25:20.255574   29384 system_pods.go:61] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:25:20.255580   29384 system_pods.go:74] duration metric: took 186.113164ms to wait for pod list to return data ...
	I0827 22:25:20.255591   29384 default_sa.go:34] waiting for default service account to be created ...
	I0827 22:25:20.441947   29384 request.go:632] Waited for 186.283914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:25:20.441999   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:25:20.442005   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.442013   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.442018   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.446197   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:20.446351   29384 default_sa.go:45] found service account: "default"
	I0827 22:25:20.446369   29384 default_sa.go:55] duration metric: took 190.773407ms for default service account to be created ...
	I0827 22:25:20.446378   29384 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 22:25:20.642703   29384 request.go:632] Waited for 196.239188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.642765   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.642773   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.642783   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.642789   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.649486   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:25:20.655662   29384 system_pods.go:86] 24 kube-system pods found
	I0827 22:25:20.655689   29384 system_pods.go:89] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:25:20.655695   29384 system_pods.go:89] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:25:20.655699   29384 system_pods.go:89] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:25:20.655704   29384 system_pods.go:89] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:25:20.655707   29384 system_pods.go:89] "etcd-ha-158602-m03" [03c9965b-f795-4663-aeb5-3814314273ff] Running
	I0827 22:25:20.655710   29384 system_pods.go:89] "kindnet-9wgcl" [e7f9bf39-41d1-4ea2-9778-78aa3e0dd9c2] Running
	I0827 22:25:20.655713   29384 system_pods.go:89] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:25:20.655717   29384 system_pods.go:89] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:25:20.655721   29384 system_pods.go:89] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:25:20.655726   29384 system_pods.go:89] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:25:20.655731   29384 system_pods.go:89] "kube-apiserver-ha-158602-m03" [5b0573ad-9bbc-4ea4-9bbf-f7cd0084a028] Running
	I0827 22:25:20.655738   29384 system_pods.go:89] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:25:20.655743   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:25:20.655749   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m03" [b1bdc020-b729-4576-91f9-7d7055ebabd3] Running
	I0827 22:25:20.655759   29384 system_pods.go:89] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:25:20.655765   29384 system_pods.go:89] "kube-proxy-nhjgk" [f21dff1b-96f0-4ee5-9ad4-524cd4948de1] Running
	I0827 22:25:20.655774   29384 system_pods.go:89] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:25:20.655779   29384 system_pods.go:89] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:25:20.655782   29384 system_pods.go:89] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:25:20.655786   29384 system_pods.go:89] "kube-scheduler-ha-158602-m03" [41ec8f3e-cf73-4447-8e88-1dde3e8d4274] Running
	I0827 22:25:20.655790   29384 system_pods.go:89] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:25:20.655793   29384 system_pods.go:89] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:25:20.655799   29384 system_pods.go:89] "kube-vip-ha-158602-m03" [6fbee1d2-e66b-447a-9f9a-1e477fc0af06] Running
	I0827 22:25:20.655803   29384 system_pods.go:89] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:25:20.655811   29384 system_pods.go:126] duration metric: took 209.428401ms to wait for k8s-apps to be running ...
	I0827 22:25:20.655820   29384 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 22:25:20.655871   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:25:20.671528   29384 system_svc.go:56] duration metric: took 15.695486ms WaitForService to wait for kubelet
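The kubelet liveness check above runs systemctl through minikube's SSH runner. A trivial local equivalent, sketch only, intended to be run on the node itself rather than over SSH:

// Sketch: exit status 0 from "systemctl is-active --quiet" means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}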
	I0827 22:25:20.671571   29384 kubeadm.go:582] duration metric: took 22.578302265s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:25:20.671602   29384 node_conditions.go:102] verifying NodePressure condition ...
	I0827 22:25:20.842486   29384 request.go:632] Waited for 170.805433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I0827 22:25:20.842549   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I0827 22:25:20.842559   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.842570   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.842580   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.846221   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.847223   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847251   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847263   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847267   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847271   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847274   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847278   29384 node_conditions.go:105] duration metric: took 175.670372ms to run NodePressure ...
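The NodePressure summary above is read from each node's status. A small client-go sketch that prints the same capacity fields (ephemeral-storage and cpu) for every node, again assuming the default kubeconfig location:

// Sketch: list nodes and print the capacity values summarized in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}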
	I0827 22:25:20.847289   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:25:20.847308   29384 start.go:255] writing updated cluster config ...
	I0827 22:25:20.847633   29384 ssh_runner.go:195] Run: rm -f paused
	I0827 22:25:20.898987   29384 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 22:25:20.901075   29384 out.go:177] * Done! kubectl is now configured to use "ha-158602" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.259116267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797738259091113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=959f9716-a17a-4478-b09d-747a472c5b73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.259795182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38672aee-67f2-40de-86ca-b5d76be3c34f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.259864696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38672aee-67f2-40de-86ca-b5d76be3c34f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.260106707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38672aee-67f2-40de-86ca-b5d76be3c34f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.294111059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3e86e90-bcc2-4eca-b4f7-fe0ef8bd5a1b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.294184150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3e86e90-bcc2-4eca-b4f7-fe0ef8bd5a1b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.295749119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a5ef69a-4ab8-4fa1-8254-8749188a59c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.296194318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797738296170112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a5ef69a-4ab8-4fa1-8254-8749188a59c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.296672358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67517b3c-9d8e-490e-8a1f-07ca3994a27b name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.296735503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67517b3c-9d8e-490e-8a1f-07ca3994a27b name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.296978974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67517b3c-9d8e-490e-8a1f-07ca3994a27b name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.335584749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1a5b4d4-5fa5-499b-a1fd-e851dac88e3a name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.335671244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1a5b4d4-5fa5-499b-a1fd-e851dac88e3a name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.337321645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82cfc1d7-6ef9-413c-b545-73ada7a898bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.337827069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797738337801621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82cfc1d7-6ef9-413c-b545-73ada7a898bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.338661019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fdd14b4-0b72-4dd1-a13b-486c7345425f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.338713058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fdd14b4-0b72-4dd1-a13b-486c7345425f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.339049891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fdd14b4-0b72-4dd1-a13b-486c7345425f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.375008338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7643f7b2-250b-490d-9cb1-0564bfe50db7 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.375079279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7643f7b2-250b-490d-9cb1-0564bfe50db7 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.376493348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22ca6cb5-4988-4a95-b696-4d804dd4bd0d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.376933871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797738376910758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22ca6cb5-4988-4a95-b696-4d804dd4bd0d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.377341146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14a17628-0a7a-48d3-9985-8e181e6e4bc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.377389607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14a17628-0a7a-48d3-9985-8e181e6e4bc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:28:58 ha-158602 crio[666]: time="2024-08-27 22:28:58.377681325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14a17628-0a7a-48d3-9985-8e181e6e4bc7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6577993a571ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4f329cad0ee8c       busybox-7dff88458-gxvsc
	70a0959d7fc34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   922e19e19e6b3       coredns-6f6b679f8f-x6dcd
	c1556743f3ed7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7e95e9aaf3336       coredns-6f6b679f8f-jxzgs
	4d999c4b0e96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   ffbe4fc48196e       storage-provisioner
	9006fd58dfc63       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   d113f6cede364       kindnet-kb84t
	79ea4c0053fb1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   240775e6cca6c       kube-proxy-5pmrv
	a18851305e21f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   c714036efe686       kube-vip-ha-158602
	eb6e08e1cf880       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   71d74ecb9f300       etcd-ha-158602
	961aabfc8401a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   807fa831db17b       kube-controller-manager-ha-158602
	ad2032c0ac674       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   ec7216a9fc947       kube-apiserver-ha-158602
	60feae8b5d1f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   5e03fa37bf662       kube-scheduler-ha-158602
	
	
	==> coredns [70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d] <==
	[INFO] 10.244.1.2:58445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003689264s
	[INFO] 10.244.1.2:40506 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145422s
	[INFO] 10.244.0.4:39982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136663s
	[INFO] 10.244.0.4:43032 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001634431s
	[INFO] 10.244.0.4:57056 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135477s
	[INFO] 10.244.0.4:60425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128187s
	[INFO] 10.244.0.4:33910 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092983s
	[INFO] 10.244.2.2:55029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617414s
	[INFO] 10.244.2.2:43643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085283s
	[INFO] 10.244.2.2:33596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116719s
	[INFO] 10.244.1.2:36406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011994s
	[INFO] 10.244.1.2:45944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072161s
	[INFO] 10.244.0.4:34595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083932s
	[INFO] 10.244.0.4:56369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051489s
	[INFO] 10.244.0.4:45069 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052963s
	[INFO] 10.244.2.2:41980 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118063s
	[INFO] 10.244.1.2:35610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170436s
	[INFO] 10.244.1.2:39033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193301s
	[INFO] 10.244.1.2:58078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123451s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128271s
	[INFO] 10.244.0.4:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010237s
	[INFO] 10.244.0.4:58359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080338s
	[INFO] 10.244.2.2:35482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009539s
	[INFO] 10.244.2.2:45798 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087557s
	[INFO] 10.244.2.2:39340 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090317s
	
	
	==> coredns [c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3] <==
	[INFO] 10.244.1.2:46115 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011856798s
	[INFO] 10.244.0.4:48603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221468s
	[INFO] 10.244.0.4:42021 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000076185s
	[INFO] 10.244.1.2:49292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012663s
	[INFO] 10.244.1.2:34885 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226601s
	[INFO] 10.244.1.2:54874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014649s
	[INFO] 10.244.1.2:34031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187993s
	[INFO] 10.244.1.2:39560 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019907s
	[INFO] 10.244.0.4:43688 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012926s
	[INFO] 10.244.0.4:51548 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001519492s
	[INFO] 10.244.0.4:58561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052435s
	[INFO] 10.244.2.2:48091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180149s
	[INFO] 10.244.2.2:45077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104198s
	[INFO] 10.244.2.2:41789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215182s
	[INFO] 10.244.2.2:52731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064319s
	[INFO] 10.244.2.2:43957 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126173s
	[INFO] 10.244.1.2:55420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084801s
	[INFO] 10.244.1.2:45306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059642s
	[INFO] 10.244.0.4:46103 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117802s
	[INFO] 10.244.2.2:39675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191879s
	[INFO] 10.244.2.2:43022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100522s
	[INFO] 10.244.2.2:53360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093376s
	[INFO] 10.244.0.4:36426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132899s
	[INFO] 10.244.0.4:42082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000167434s
	[INFO] 10.244.2.2:36926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139785s
	
	
	==> describe nodes <==
	Name:               ha-158602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:22:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:28:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-158602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f393f25de7274e45b62eb7b988ece32c
	  System UUID:                f393f25d-e727-4e45-b62e-b7b988ece32c
	  Boot ID:                    a1b3c582-a6fa-4ddf-91a6-fe921f43a40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxvsc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-6f6b679f8f-jxzgs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 coredns-6f6b679f8f-x6dcd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 etcd-ha-158602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-kb84t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-158602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-158602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-5pmrv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-158602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-158602                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m7s   kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-158602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-158602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-158602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-158602 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	
	
	Name:               ha-158602-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:23:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:26:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    ha-158602-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b63e2f54de44a9e8ad7eb0ee8626bfb
	  System UUID:                1b63e2f5-4de4-4a9e-8ad7-eb0ee8626bfb
	  Boot ID:                    de317c2d-f8b8-42bc-8e7c-1542b778172c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-crtgh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-158602-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-zmc6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-158602-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-158602-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-slgmm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-158602-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-158602-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-158602-m02 status is now: NodeNotReady
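
The ha-158602-m02 node above is the unhealthy one in this cluster: its conditions went Unknown once the kubelet stopped posting status, and the node-controller tainted it unreachable and marked it NotReady. As a cross-check, a minimal sketch (assuming kubectl points at this cluster's kubeconfig; the node name is taken from the output above):

	# Sketch: read the unreachable taints and the Ready condition straight from the API
	kubectl get node ha-158602-m02 -o jsonpath='{.spec.taints}'
	kubectl get node ha-158602-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}'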
	
	
	Name:               ha-158602-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_24_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-158602-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d02faebd780a49dd8e6ae91df2852b5e
	  System UUID:                d02faebd-780a-49dd-8e6a-e91df2852b5e
	  Boot ID:                    5fda21c4-296f-4b36-bb5f-5f3dc48345cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmcwr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-158602-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-9wgcl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-158602-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-158602-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-nhjgk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-158602-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-158602-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-158602-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	
	
	Name:               ha-158602-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:25:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:28:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:26:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-158602-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad10535aaed444b79090a76efa3929c7
	  System UUID:                ad10535a-aed4-44b7-9090-a76efa3929c7
	  Boot ID:                    a9c768c5-396c-462d-ba6b-654fe7bbf53a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6szl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-658sj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-158602-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug27 22:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.699214] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.788922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.878829] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.217797] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053782] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198923] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125102] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.284457] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.718918] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.171591] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.060183] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.161491] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.086175] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.529529] kauditd_printk_skb: 21 callbacks suppressed
	[Aug27 22:23] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.211142] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff] <==
	{"level":"warn","ts":"2024-08-27T22:28:58.552856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.632076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.641147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.645337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.653565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.660095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.667750Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.674837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.678798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.682183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.688506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.694330Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.700545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.704649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.708336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.715290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.721431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.727552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.730919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.733996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.737581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.743400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.748788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.753295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:28:58.825966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:28:58 up 6 min,  0 users,  load average: 0.16, 0.22, 0.11
	Linux ha-158602 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03] <==
	I0827 22:28:25.265697       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:28:35.268631       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:28:35.268701       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:28:35.268928       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:28:35.268953       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:28:35.269018       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:28:35.269040       1 main.go:299] handling current node
	I0827 22:28:35.269058       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:28:35.269064       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:28:45.271727       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:28:45.271845       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:28:45.272080       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:28:45.272112       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:28:45.272197       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:28:45.272217       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:28:45.272285       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:28:45.272304       1 main.go:299] handling current node
	I0827 22:28:55.262545       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:28:55.263389       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:28:55.263684       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:28:55.263707       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:28:55.263799       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:28:55.263817       1 main.go:299] handling current node
	I0827 22:28:55.263838       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:28:55.263858       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d] <==
	I0827 22:22:44.133834       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0827 22:22:44.141362       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77]
	I0827 22:22:44.142528       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:22:44.152024       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 22:22:44.442387       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:22:45.179875       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:22:45.199550       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0827 22:22:45.352262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:22:49.950652       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0827 22:22:50.044540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0827 22:25:26.119981       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55998: use of closed network connection
	E0827 22:25:26.309740       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56020: use of closed network connection
	E0827 22:25:26.491976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56030: use of closed network connection
	E0827 22:25:26.683907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56056: use of closed network connection
	E0827 22:25:26.865254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56076: use of closed network connection
	E0827 22:25:27.040369       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56092: use of closed network connection
	E0827 22:25:27.215029       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56118: use of closed network connection
	E0827 22:25:27.386248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56142: use of closed network connection
	E0827 22:25:27.554363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56158: use of closed network connection
	E0827 22:25:27.831918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56182: use of closed network connection
	E0827 22:25:28.002684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56192: use of closed network connection
	E0827 22:25:28.182151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56214: use of closed network connection
	E0827 22:25:28.347892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56228: use of closed network connection
	E0827 22:25:28.521779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56246: use of closed network connection
	E0827 22:25:28.679982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56270: use of closed network connection
	
	
	==> kube-controller-manager [961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280] <==
	I0827 22:25:57.734085       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-158602-m04" podCIDRs=["10.244.3.0/24"]
	I0827 22:25:57.734219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.734326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.752966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.905629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:58.087759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:58.285593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:59.357632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:59.358025       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-158602-m04"
	I0827 22:25:59.478306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:02.256969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:02.304607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:07.754974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:18.322634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:18.322793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-158602-m04"
	I0827 22:26:18.338119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:19.375837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:28.373575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:27:13.048965       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-158602-m04"
	I0827 22:27:13.049288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:13.073556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:13.083138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.578896ms"
	I0827 22:27:13.083763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.113µs"
	I0827 22:27:14.451256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:18.293809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	
	
	==> kube-proxy [79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:22:51.012541       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:22:51.029420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0827 22:22:51.029562       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:22:51.070953       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:22:51.071047       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:22:51.071093       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:22:51.073377       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:22:51.073729       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:22:51.073783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:22:51.075157       1 config.go:197] "Starting service config controller"
	I0827 22:22:51.075295       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:22:51.075385       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:22:51.075407       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:22:51.077356       1 config.go:326] "Starting node config controller"
	I0827 22:22:51.077397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:22:51.175473       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:22:51.175537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:22:51.177574       1 shared_informer.go:320] Caches are synced for node config
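
The "Error cleaning up nftables rules" entries at the top of this log (one each for the ip and ip6 tables) are kube-proxy's startup attempt to clear stale nftables rules on a kernel without nft support; they are not fatal, and the proxier comes up in iptables mode as the following lines show. A minimal sketch for confirming the configured mode, assuming the standard kubeadm-style kube-proxy ConfigMap and pod labels:

	# Sketch: compare the configured proxy mode with what the running pods report
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
	kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier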
	
	
	==> kube-scheduler [60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f] <==
	E0827 22:22:43.457140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.515725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.515773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.607309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.607360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.612593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.612665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.690576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0827 22:22:43.690707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.708822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.708922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.744560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 22:22:43.744717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.769502       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 22:22:43.769600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.826851       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:22:43.828024       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:22:46.441310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:25:57.773909       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.774761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-658sj"
	I0827 22:25:57.775154       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.831035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:25:57.831164       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f48452c-8a4b-403b-9da9-90f2dab5ec70(kube-system/kube-proxy-d6zj9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d6zj9"
	E0827 22:25:57.831230       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-d6zj9"
	I0827 22:25:57.831281       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	
	
	==> kubelet <==
	Aug 27 22:27:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:27:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:27:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:27:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:27:45 ha-158602 kubelet[1308]: E0827 22:27:45.431578    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797665431235006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:27:45 ha-158602 kubelet[1308]: E0827 22:27:45.431608    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797665431235006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:27:55 ha-158602 kubelet[1308]: E0827 22:27:55.433836    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797675433433617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:27:55 ha-158602 kubelet[1308]: E0827 22:27:55.434275    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797675433433617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:05 ha-158602 kubelet[1308]: E0827 22:28:05.436123    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797685435570848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:05 ha-158602 kubelet[1308]: E0827 22:28:05.436427    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797685435570848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:15 ha-158602 kubelet[1308]: E0827 22:28:15.438818    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797695438493699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:15 ha-158602 kubelet[1308]: E0827 22:28:15.438872    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797695438493699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:25 ha-158602 kubelet[1308]: E0827 22:28:25.441225    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797705440691505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:25 ha-158602 kubelet[1308]: E0827 22:28:25.441866    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797705440691505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:35 ha-158602 kubelet[1308]: E0827 22:28:35.443852    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797715443357537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:35 ha-158602 kubelet[1308]: E0827 22:28:35.443889    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797715443357537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.363880    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:28:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.446063    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797725445700751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.446089    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797725445700751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:55 ha-158602 kubelet[1308]: E0827 22:28:55.448223    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797735447721130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:55 ha-158602 kubelet[1308]: E0827 22:28:55.448279    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797735447721130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-158602 -n ha-158602
helpers_test.go:261: (dbg) Run:  kubectl --context ha-158602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
E0827 22:29:05.109281   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (3.193063993s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:03.302500   34196 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:03.302816   34196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:03.302832   34196 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:03.302840   34196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:03.303063   34196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:03.303269   34196 out.go:352] Setting JSON to false
	I0827 22:29:03.303299   34196 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:03.303363   34196 notify.go:220] Checking for updates...
	I0827 22:29:03.303801   34196 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:03.303824   34196 status.go:255] checking status of ha-158602 ...
	I0827 22:29:03.304280   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.304329   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.319957   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0827 22:29:03.320409   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.321076   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.321106   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.321426   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.321601   34196 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:03.323165   34196 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:03.323178   34196 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:03.323506   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.323543   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.339024   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
	I0827 22:29:03.339449   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.339952   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.339967   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.340387   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.340599   34196 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:03.343536   34196 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:03.343972   34196 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:03.343999   34196 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:03.344089   34196 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:03.344395   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.344449   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.359020   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0827 22:29:03.359593   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.360069   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.360091   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.360455   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.360671   34196 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:03.360860   34196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:03.360899   34196 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:03.364104   34196 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:03.364599   34196 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:03.364635   34196 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:03.364779   34196 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:03.364944   34196 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:03.365105   34196 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:03.365252   34196 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:03.448036   34196 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:03.454017   34196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:03.468875   34196 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:03.468903   34196 api_server.go:166] Checking apiserver status ...
	I0827 22:29:03.468942   34196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:03.485541   34196 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:03.494700   34196 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:03.494744   34196 ssh_runner.go:195] Run: ls
	I0827 22:29:03.498821   34196 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:03.505456   34196 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:03.505477   34196 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:03.505487   34196 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:03.505508   34196 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:03.505826   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.505862   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.520376   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0827 22:29:03.520775   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.521213   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.521230   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.521556   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.521735   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:03.523134   34196 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:03.523159   34196 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:03.523445   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.523485   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.538225   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0827 22:29:03.538605   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.538998   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.539027   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.539278   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.539430   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:03.542155   34196 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:03.542509   34196 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:03.542541   34196 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:03.542676   34196 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:03.542994   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:03.543035   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:03.558055   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0827 22:29:03.558520   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:03.559002   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:03.559021   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:03.559306   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:03.559512   34196 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:03.559714   34196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:03.559736   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:03.562470   34196 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:03.562877   34196 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:03.562911   34196 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:03.563049   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:03.563250   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:03.563429   34196 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:03.563592   34196 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:06.116799   34196 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:06.116897   34196 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:06.116916   34196 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:06.116923   34196 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:06.116940   34196 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:06.116948   34196 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:06.117257   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.117305   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.131966   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I0827 22:29:06.132414   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.132926   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.132950   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.133223   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.133451   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:06.135009   34196 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:06.135022   34196 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:06.135328   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.135365   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.149407   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0827 22:29:06.149805   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.150246   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.150262   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.150625   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.150872   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:06.153717   34196 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:06.154209   34196 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:06.154234   34196 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:06.154491   34196 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:06.154881   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.154929   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.170413   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0827 22:29:06.170824   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.171324   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.171350   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.171616   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.171799   34196 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:06.171956   34196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:06.171978   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:06.174737   34196 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:06.175113   34196 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:06.175132   34196 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:06.175265   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:06.175442   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:06.175609   34196 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:06.175714   34196 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:06.255855   34196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:06.270594   34196 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:06.270627   34196 api_server.go:166] Checking apiserver status ...
	I0827 22:29:06.270667   34196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:06.284308   34196 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:06.294649   34196 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:06.294695   34196 ssh_runner.go:195] Run: ls
	I0827 22:29:06.298633   34196 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:06.302911   34196 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:06.302935   34196 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:06.302946   34196 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:06.302964   34196 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:06.303348   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.303398   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.318407   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0827 22:29:06.318821   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.319289   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.319308   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.319664   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.319866   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:06.321458   34196 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:06.321472   34196 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:06.321744   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.321774   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.336269   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0827 22:29:06.336720   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.337149   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.337168   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.337475   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.337653   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:06.340243   34196 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:06.340660   34196 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:06.340692   34196 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:06.340899   34196 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:06.341307   34196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:06.341394   34196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:06.356059   34196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0827 22:29:06.356403   34196 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:06.356889   34196 main.go:141] libmachine: Using API Version  1
	I0827 22:29:06.356911   34196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:06.357204   34196 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:06.357474   34196 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:06.357655   34196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:06.357677   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:06.360212   34196 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:06.360534   34196 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:06.360577   34196 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:06.360692   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:06.360867   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:06.361004   34196 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:06.361130   34196 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:06.440215   34196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:06.453726   34196 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (4.800807462s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:07.850302   34296 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:07.850416   34296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:07.850425   34296 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:07.850429   34296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:07.850590   34296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:07.850760   34296 out.go:352] Setting JSON to false
	I0827 22:29:07.850784   34296 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:07.850823   34296 notify.go:220] Checking for updates...
	I0827 22:29:07.851192   34296 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:07.851211   34296 status.go:255] checking status of ha-158602 ...
	I0827 22:29:07.851638   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:07.851681   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:07.867470   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0827 22:29:07.867977   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:07.868689   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:07.868715   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:07.869047   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:07.869279   34296 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:07.870989   34296 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:07.871006   34296 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:07.871373   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:07.871409   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:07.886332   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0827 22:29:07.886750   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:07.887187   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:07.887215   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:07.887498   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:07.887684   34296 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:07.890393   34296 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:07.890899   34296 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:07.890927   34296 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:07.891090   34296 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:07.891460   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:07.891543   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:07.908723   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0827 22:29:07.909196   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:07.909669   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:07.909689   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:07.909996   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:07.910218   34296 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:07.910451   34296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:07.910483   34296 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:07.913412   34296 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:07.913864   34296 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:07.913891   34296 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:07.914098   34296 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:07.914290   34296 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:07.914495   34296 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:07.914688   34296 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:08.000417   34296 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:08.006615   34296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:08.022304   34296 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:08.022338   34296 api_server.go:166] Checking apiserver status ...
	I0827 22:29:08.022380   34296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:08.039236   34296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:08.056766   34296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:08.056830   34296 ssh_runner.go:195] Run: ls
	I0827 22:29:08.061040   34296 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:08.067108   34296 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:08.067131   34296 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:08.067140   34296 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:08.067155   34296 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:08.067458   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:08.067496   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:08.082758   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0827 22:29:08.083157   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:08.083751   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:08.083771   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:08.084155   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:08.084358   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:08.085827   34296 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:08.085840   34296 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:08.086106   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:08.086136   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:08.100859   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0827 22:29:08.101281   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:08.101807   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:08.101837   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:08.102140   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:08.102309   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:08.104999   34296 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:08.105508   34296 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:08.105524   34296 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:08.105678   34296 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:08.105949   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:08.105985   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:08.120852   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0827 22:29:08.121393   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:08.121819   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:08.121840   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:08.122244   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:08.122446   34296 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:08.122647   34296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:08.122669   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:08.125286   34296 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:08.125732   34296 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:08.125753   34296 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:08.125876   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:08.126028   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:08.126197   34296 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:08.126468   34296 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:09.192791   34296 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:09.192832   34296 retry.go:31] will retry after 158.382033ms: dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:12.260813   34296 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:12.260904   34296 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:12.260923   34296 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:12.260940   34296 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:12.260964   34296 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:12.260978   34296 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:12.261289   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.261335   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.278130   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0827 22:29:12.278550   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.279035   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.279056   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.279449   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.279671   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:12.281663   34296 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:12.281680   34296 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:12.281970   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.282028   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.296417   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0827 22:29:12.296847   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.297347   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.297367   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.297651   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.297840   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:12.300425   34296 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:12.300786   34296 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:12.300805   34296 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:12.300913   34296 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:12.301300   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.301367   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.317316   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0827 22:29:12.317727   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.318184   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.318203   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.318490   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.318663   34296 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:12.318838   34296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:12.318855   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:12.321828   34296 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:12.322284   34296 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:12.322313   34296 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:12.322525   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:12.322748   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:12.322924   34296 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:12.323055   34296 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:12.400429   34296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:12.415260   34296 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:12.415288   34296 api_server.go:166] Checking apiserver status ...
	I0827 22:29:12.415327   34296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:12.433260   34296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:12.449405   34296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:12.449464   34296 ssh_runner.go:195] Run: ls
	I0827 22:29:12.453864   34296 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:12.458005   34296 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:12.458029   34296 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:12.458040   34296 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:12.458059   34296 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:12.458383   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.458439   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.474378   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I0827 22:29:12.474854   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.475383   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.475411   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.475752   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.475954   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:12.477561   34296 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:12.477577   34296 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:12.477935   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.477992   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.492847   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0827 22:29:12.493289   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.493708   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.493725   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.494054   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.494240   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:12.497115   34296 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:12.497552   34296 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:12.497599   34296 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:12.497758   34296 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:12.498064   34296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:12.498104   34296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:12.513024   34296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0827 22:29:12.513436   34296 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:12.513883   34296 main.go:141] libmachine: Using API Version  1
	I0827 22:29:12.513911   34296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:12.514225   34296 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:12.514384   34296 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:12.514568   34296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:12.514598   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:12.517285   34296 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:12.517715   34296 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:12.517742   34296 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:12.517856   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:12.518001   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:12.518144   34296 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:12.518258   34296 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:12.594847   34296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:12.607797   34296 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (4.330021219s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:14.681496   34397 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:14.681746   34397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:14.681755   34397 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:14.681759   34397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:14.681947   34397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:14.682166   34397 out.go:352] Setting JSON to false
	I0827 22:29:14.682195   34397 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:14.682332   34397 notify.go:220] Checking for updates...
	I0827 22:29:14.682674   34397 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:14.682689   34397 status.go:255] checking status of ha-158602 ...
	I0827 22:29:14.683074   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.683128   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.702669   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0827 22:29:14.703187   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.703766   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.703791   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.704120   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.704288   34397 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:14.705978   34397 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:14.705996   34397 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:14.706306   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.706347   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.721687   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0827 22:29:14.722125   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.722699   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.722734   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.723019   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.723192   34397 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:14.725965   34397 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:14.726375   34397 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:14.726415   34397 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:14.726483   34397 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:14.726844   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.726891   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.741278   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0827 22:29:14.741645   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.742115   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.742138   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.742466   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.742660   34397 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:14.742831   34397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:14.742855   34397 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:14.745507   34397 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:14.745972   34397 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:14.746009   34397 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:14.746103   34397 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:14.746264   34397 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:14.746426   34397 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:14.746574   34397 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:14.828689   34397 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:14.834582   34397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:14.849554   34397 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:14.849596   34397 api_server.go:166] Checking apiserver status ...
	I0827 22:29:14.849634   34397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:14.862918   34397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:14.873039   34397 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:14.873097   34397 ssh_runner.go:195] Run: ls
	I0827 22:29:14.877405   34397 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:14.881869   34397 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:14.881889   34397 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:14.881900   34397 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:14.881937   34397 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:14.882219   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.882257   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.897299   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0827 22:29:14.897765   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.898201   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.898222   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.898523   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.898747   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:14.900458   34397 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:14.900490   34397 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:14.900793   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.900833   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.915382   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
	I0827 22:29:14.915774   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.916208   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.916230   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.916653   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.916874   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:14.919638   34397 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:14.920078   34397 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:14.920100   34397 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:14.920296   34397 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:14.920774   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:14.920811   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:14.936086   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0827 22:29:14.936540   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:14.937032   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:14.937052   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:14.937410   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:14.937632   34397 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:14.937833   34397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:14.937855   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:14.940490   34397 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:14.940952   34397 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:14.940981   34397 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:14.941131   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:14.941287   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:14.941438   34397 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:14.941544   34397 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:15.332741   34397 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:15.332787   34397 retry.go:31] will retry after 224.552677ms: dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:18.628754   34397 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:18.628843   34397 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:18.628882   34397 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:18.628894   34397 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:18.628911   34397 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:18.628918   34397 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:18.629235   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.629287   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.644312   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0827 22:29:18.644778   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.645210   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.645240   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.645589   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.645806   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:18.647385   34397 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:18.647399   34397 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:18.647691   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.647727   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.663164   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0827 22:29:18.663582   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.664037   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.664057   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.664321   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.664512   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:18.667296   34397 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:18.667757   34397 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:18.667775   34397 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:18.667906   34397 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:18.668299   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.668366   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.683718   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0827 22:29:18.684170   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.684665   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.684687   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.685009   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.685217   34397 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:18.685466   34397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:18.685486   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:18.688044   34397 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:18.688499   34397 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:18.688524   34397 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:18.688722   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:18.688906   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:18.689031   34397 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:18.689157   34397 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:18.771357   34397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:18.787527   34397 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:18.787551   34397 api_server.go:166] Checking apiserver status ...
	I0827 22:29:18.787583   34397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:18.803404   34397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:18.812979   34397 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:18.813042   34397 ssh_runner.go:195] Run: ls
	I0827 22:29:18.817840   34397 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:18.822212   34397 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:18.822235   34397 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:18.822247   34397 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:18.822266   34397 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:18.822668   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.822707   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.837277   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0827 22:29:18.837711   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.838150   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.838167   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.838441   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.838609   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:18.840037   34397 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:18.840050   34397 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:18.840302   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.840336   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.854529   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0827 22:29:18.854985   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.855445   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.855462   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.855740   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.855907   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:18.858384   34397 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:18.858782   34397 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:18.858806   34397 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:18.858954   34397 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:18.859275   34397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:18.859321   34397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:18.873709   34397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0827 22:29:18.874088   34397 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:18.874489   34397 main.go:141] libmachine: Using API Version  1
	I0827 22:29:18.874507   34397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:18.874815   34397 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:18.874987   34397 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:18.875157   34397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:18.875186   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:18.877977   34397 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:18.878485   34397 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:18.878508   34397 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:18.878647   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:18.878806   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:18.878981   34397 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:18.879164   34397 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:18.955584   34397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:18.970304   34397 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
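The failure mode in the trace above is consistent across retries: the TCP dial to ha-158602-m02 at 192.168.39.142:22 returns "no route to host", so the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent, while the other control-plane nodes pass the apiserver probe against https://192.168.39.254:8443/healthz. The sketch below reproduces those two probes as a standalone diagnostic, assuming the addresses reported in the trace; it is not minikube's own code.

	// reachability_check.go - standalone diagnostic sketch (assumption: the
	// addresses below are the ones reported in the trace above).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// 1) SSH reachability: the status command fails at exactly this step
		//    for ha-158602-m02 ("dial tcp 192.168.39.142:22: connect: no route
		//    to host"), which is why it flips to Host:Error.
		conn, err := net.DialTimeout("tcp", "192.168.39.142:22", 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err)
		} else {
			conn.Close()
			fmt.Println("ssh port reachable")
		}

		// 2) Control-plane health: the same endpoint the trace probes on the
		//    healthy nodes, expecting HTTP 200 with body "ok".
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}
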
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (4.924043893s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:20.229984   34496 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:20.230458   34496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:20.230477   34496 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:20.230485   34496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:20.230963   34496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:20.231304   34496 out.go:352] Setting JSON to false
	I0827 22:29:20.231344   34496 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:20.231594   34496 notify.go:220] Checking for updates...
	I0827 22:29:20.232178   34496 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:20.232198   34496 status.go:255] checking status of ha-158602 ...
	I0827 22:29:20.232731   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.232779   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.247633   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0827 22:29:20.248072   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.248614   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.248633   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.249005   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.249229   34496 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:20.251179   34496 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:20.251197   34496 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:20.251598   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.251640   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.267100   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0827 22:29:20.267523   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.268030   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.268059   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.268353   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.268532   34496 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:20.270897   34496 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:20.271381   34496 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:20.271420   34496 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:20.271530   34496 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:20.272021   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.272144   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.286829   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0827 22:29:20.287228   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.287894   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.287917   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.288263   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.288559   34496 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:20.288765   34496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:20.288797   34496 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:20.291660   34496 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:20.292096   34496 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:20.292125   34496 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:20.292295   34496 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:20.292479   34496 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:20.292628   34496 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:20.292765   34496 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:20.376018   34496 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:20.383648   34496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:20.397670   34496 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:20.397707   34496 api_server.go:166] Checking apiserver status ...
	I0827 22:29:20.397745   34496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:20.410846   34496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:20.420655   34496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:20.420701   34496 ssh_runner.go:195] Run: ls
	I0827 22:29:20.424725   34496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:20.428745   34496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:20.428764   34496 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:20.428773   34496 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:20.428788   34496 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:20.429072   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.429100   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.446029   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0827 22:29:20.446413   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.446872   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.446894   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.447187   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.447343   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:20.448962   34496 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:20.448978   34496 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:20.449257   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.449286   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.464528   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34419
	I0827 22:29:20.464965   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.465418   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.465447   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.465846   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.466043   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:20.469111   34496 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:20.469579   34496 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:20.469606   34496 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:20.469775   34496 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:20.470084   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:20.470129   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:20.485008   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0827 22:29:20.485423   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:20.485920   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:20.485942   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:20.486287   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:20.486496   34496 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:20.486690   34496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:20.486711   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:20.489242   34496 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:20.489655   34496 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:20.489688   34496 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:20.489826   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:20.489966   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:20.490068   34496 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:20.490230   34496 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:21.700723   34496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:21.700768   34496 retry.go:31] will retry after 169.590738ms: dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:24.776750   34496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:24.776842   34496 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:24.776859   34496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:24.776881   34496 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:24.776903   34496 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:24.776914   34496 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:24.777322   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:24.777453   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:24.792415   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0827 22:29:24.792847   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:24.793316   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:24.793338   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:24.793615   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:24.793796   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:24.795620   34496 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:24.795637   34496 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:24.796041   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:24.796101   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:24.811233   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45873
	I0827 22:29:24.811655   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:24.812099   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:24.812126   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:24.812420   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:24.812627   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:24.815439   34496 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:24.815810   34496 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:24.815829   34496 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:24.816057   34496 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:24.816378   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:24.816412   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:24.830689   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0827 22:29:24.831084   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:24.831621   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:24.831649   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:24.831961   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:24.832163   34496 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:24.832367   34496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:24.832388   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:24.835428   34496 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:24.835859   34496 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:24.835890   34496 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:24.836066   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:24.836219   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:24.836377   34496 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:24.836519   34496 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:24.912389   34496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:24.928892   34496 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:24.928920   34496 api_server.go:166] Checking apiserver status ...
	I0827 22:29:24.928970   34496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:24.942857   34496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:24.953157   34496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:24.953205   34496 ssh_runner.go:195] Run: ls
	I0827 22:29:24.957771   34496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:24.962386   34496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:24.962407   34496 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:24.962415   34496 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:24.962435   34496 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:24.962735   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:24.962764   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:24.977712   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0827 22:29:24.978138   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:24.978652   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:24.978679   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:24.979074   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:24.979270   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:24.980927   34496 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:24.980945   34496 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:24.981217   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:24.981254   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:24.995925   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0827 22:29:24.996400   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:24.996842   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:24.996864   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:24.997156   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:24.997337   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:24.999947   34496 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:25.000315   34496 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:25.000356   34496 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:25.000517   34496 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:25.000827   34496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:25.000860   34496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:25.016080   34496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0827 22:29:25.016638   34496 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:25.017167   34496 main.go:141] libmachine: Using API Version  1
	I0827 22:29:25.017186   34496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:25.017560   34496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:25.017767   34496 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:25.018005   34496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:25.018028   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:25.020780   34496 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:25.021268   34496 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:25.021287   34496 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:25.021523   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:25.021702   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:25.021853   34496 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:25.022027   34496 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:25.099271   34496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:25.112773   34496 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
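Each per-node check in these traces opens with the storage probe `sh -c "df -h /var | awk 'NR==2{print $5}'"`; when that command cannot even be delivered over SSH (as with ha-158602-m02), the log shows `failed to get storage capacity of /var` and the host is marked Error. A minimal local sketch of that probe follows, assuming it runs on the node itself rather than through the SSH session shown in the log; illustrative only.

	// df_check.go - illustrative sketch of the storage probe seen in the trace
	// (assumption: executed locally on a node that has /var mounted).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same shell pipeline as in the trace: second line of `df -h /var`,
		// fifth column = usage percentage.
		out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
		if err != nil {
			// In the trace, a failure at this step is what triggers
			// "failed to get storage capacity of /var" and Host:Error.
			fmt.Println("failed to get storage capacity of /var:", err)
			return
		}
		fmt.Println("/var usage:", strings.TrimSpace(string(out)))
	}
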
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (3.733239015s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:29.103631   34612 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:29.103853   34612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:29.103860   34612 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:29.103865   34612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:29.104042   34612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:29.104191   34612 out.go:352] Setting JSON to false
	I0827 22:29:29.104214   34612 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:29.104328   34612 notify.go:220] Checking for updates...
	I0827 22:29:29.104593   34612 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:29.104606   34612 status.go:255] checking status of ha-158602 ...
	I0827 22:29:29.104985   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.105039   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.123122   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0827 22:29:29.123531   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.124086   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.124112   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.124505   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.124807   34612 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:29.126435   34612 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:29.126449   34612 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:29.126770   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.126808   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.141904   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0827 22:29:29.142290   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.142725   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.142749   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.143060   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.143236   34612 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:29.146038   34612 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:29.146535   34612 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:29.146564   34612 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:29.146677   34612 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:29.146952   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.146983   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.161704   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0827 22:29:29.162121   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.162560   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.162584   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.162940   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.163137   34612 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:29.163359   34612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:29.163385   34612 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:29.165968   34612 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:29.166385   34612 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:29.166421   34612 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:29.166539   34612 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:29.166716   34612 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:29.166873   34612 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:29.167022   34612 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:29.251725   34612 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:29.257585   34612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:29.277457   34612 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:29.277485   34612 api_server.go:166] Checking apiserver status ...
	I0827 22:29:29.277523   34612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:29.292000   34612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:29.301600   34612 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:29.301652   34612 ssh_runner.go:195] Run: ls
	I0827 22:29:29.306381   34612 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:29.310603   34612 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:29.310625   34612 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:29.310634   34612 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:29.310652   34612 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:29.310945   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.310980   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.326598   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0827 22:29:29.327015   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.327475   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.327494   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.327790   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.328014   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:29.329892   34612 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:29.329923   34612 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:29.330202   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.330236   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.347547   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0827 22:29:29.347933   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.348390   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.348412   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.348736   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.348897   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:29.351708   34612 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:29.352136   34612 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:29.352162   34612 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:29.352317   34612 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:29.352675   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:29.352718   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:29.367514   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0827 22:29:29.368007   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:29.368538   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:29.368559   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:29.368864   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:29.369055   34612 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:29.369240   34612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:29.369261   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:29.372227   34612 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:29.372615   34612 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:29.372650   34612 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:29.372788   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:29.372951   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:29.373106   34612 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:29.373234   34612 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:32.452718   34612 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:32.452801   34612 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:32.452818   34612 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:32.452825   34612 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:32.452841   34612 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:32.452849   34612 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:32.453178   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.453223   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.469000   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38571
	I0827 22:29:32.469405   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.469922   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.469951   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.470293   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.470555   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:32.472203   34612 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:32.472223   34612 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:32.472592   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.472647   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.487988   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I0827 22:29:32.488413   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.488849   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.488883   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.489277   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.489480   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:32.492186   34612 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:32.492745   34612 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:32.492771   34612 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:32.492949   34612 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:32.493331   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.493375   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.508185   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0827 22:29:32.508566   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.509015   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.509044   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.509360   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.509545   34612 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:32.509736   34612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:32.509753   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:32.512630   34612 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:32.513042   34612 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:32.513063   34612 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:32.513218   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:32.513404   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:32.513550   34612 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:32.513678   34612 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:32.591565   34612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:32.605518   34612 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:32.605547   34612 api_server.go:166] Checking apiserver status ...
	I0827 22:29:32.605598   34612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:32.619737   34612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:32.628428   34612 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:32.628489   34612 ssh_runner.go:195] Run: ls
	I0827 22:29:32.633450   34612 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:32.640206   34612 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:32.640233   34612 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:32.640243   34612 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:32.640258   34612 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:32.640601   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.640658   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.656402   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0827 22:29:32.656873   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.657378   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.657404   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.657788   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.658002   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:32.659741   34612 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:32.659758   34612 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:32.660050   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.660083   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.674924   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0827 22:29:32.675334   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.675835   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.675858   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.676183   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.676405   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:32.679425   34612 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:32.679915   34612 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:32.679940   34612 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:32.680110   34612 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:32.680442   34612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:32.680497   34612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:32.695834   34612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43603
	I0827 22:29:32.696299   34612 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:32.696840   34612 main.go:141] libmachine: Using API Version  1
	I0827 22:29:32.696865   34612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:32.697161   34612 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:32.697325   34612 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:32.697496   34612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:32.697514   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:32.700022   34612 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:32.700420   34612 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:32.700452   34612 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:32.700622   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:32.700782   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:32.700923   34612 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:32.701049   34612 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:32.779475   34612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:32.794322   34612 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
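The stderr capture above shows why this `status` invocation exits non-zero: the second control-plane node (ha-158602-m02, 192.168.39.142) fails its `/var` storage check because SSH dials to port 22 return `no route to host` while the VM is still coming back up, so m02 is reported as `Host:Error`. The test then re-runs `minikube status` and checks the exit code again. The snippet below is only a minimal, hypothetical illustration of such a poll-until-healthy loop around the `status` command; the binary path, profile name, timeout, and polling interval are assumptions for illustration and this is not the actual ha_test.go logic.

// Hypothetical sketch: poll "minikube status" until it exits 0 or a timeout
// elapses. Binary path, profile, timeout and interval are illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForHealthyStatus(binary, profile string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command(binary, "-p", profile, "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil // exit status 0: every node reported healthy
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("status never became healthy: %v\n%s", err, out)
		}
		// Node may still be rebooting (e.g. SSH "no route to host"); retry.
		time.Sleep(interval)
	}
}

func main() {
	err := waitForHealthyStatus("out/minikube-linux-amd64", "ha-158602", 3*time.Minute, 5*time.Second)
	fmt.Println(err)
}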
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (3.717123541s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:37.060294   34713 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:37.060452   34713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:37.060480   34713 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:37.060488   34713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:37.060679   34713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:37.060832   34713 out.go:352] Setting JSON to false
	I0827 22:29:37.060856   34713 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:37.060904   34713 notify.go:220] Checking for updates...
	I0827 22:29:37.061353   34713 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:37.061374   34713 status.go:255] checking status of ha-158602 ...
	I0827 22:29:37.061830   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.061900   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.077001   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0827 22:29:37.077415   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.078013   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.078037   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.078370   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.078564   34713 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:37.080384   34713 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:37.080399   34713 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:37.080750   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.080794   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.095527   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0827 22:29:37.095902   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.096399   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.096431   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.096778   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.097008   34713 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:37.099816   34713 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:37.100277   34713 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:37.100299   34713 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:37.100438   34713 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:37.100754   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.100786   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.116133   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0827 22:29:37.116596   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.117079   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.117096   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.117555   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.117767   34713 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:37.117948   34713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:37.117976   34713 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:37.120577   34713 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:37.121017   34713 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:37.121045   34713 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:37.121213   34713 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:37.121411   34713 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:37.121574   34713 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:37.121716   34713 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:37.204220   34713 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:37.211250   34713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:37.227409   34713 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:37.227438   34713 api_server.go:166] Checking apiserver status ...
	I0827 22:29:37.227471   34713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:37.242072   34713 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:37.253472   34713 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:37.253521   34713 ssh_runner.go:195] Run: ls
	I0827 22:29:37.258113   34713 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:37.262951   34713 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:37.262974   34713 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:37.262984   34713 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:37.263000   34713 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:37.263279   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.263312   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.278142   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46417
	I0827 22:29:37.278555   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.279001   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.279019   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.279343   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.279545   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:37.281302   34713 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:29:37.281321   34713 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:37.281725   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.281771   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.296689   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I0827 22:29:37.297101   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.297597   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.297624   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.297981   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.298201   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:29:37.300952   34713 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:37.301424   34713 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:37.301455   34713 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:37.301609   34713 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:29:37.301892   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:37.301931   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:37.317228   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
	I0827 22:29:37.317665   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:37.318118   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:37.318137   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:37.318506   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:37.318751   34713 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:29:37.318946   34713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:37.318964   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:29:37.321981   34713 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:37.322432   34713 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:29:37.322457   34713 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:29:37.322592   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:29:37.322776   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:29:37.322922   34713 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:29:37.323059   34713 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	W0827 22:29:40.392687   34713 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.142:22: connect: no route to host
	W0827 22:29:40.392776   34713 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0827 22:29:40.392799   34713 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:40.392812   34713 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:29:40.392827   34713 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	I0827 22:29:40.392835   34713 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:40.393183   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.393230   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.409532   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I0827 22:29:40.410007   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.410477   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.410505   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.410813   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.410971   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:40.412493   34713 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:40.412509   34713 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:40.412799   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.412832   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.427596   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0827 22:29:40.428014   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.428518   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.428538   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.428838   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.429039   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:40.431870   34713 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:40.432437   34713 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:40.432494   34713 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:40.432524   34713 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:40.432828   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.432897   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.448644   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
	I0827 22:29:40.449046   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.449460   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.449480   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.449829   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.450018   34713 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:40.450238   34713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:40.450261   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:40.453046   34713 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:40.453530   34713 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:40.453549   34713 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:40.453739   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:40.453902   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:40.454035   34713 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:40.454144   34713 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:40.531866   34713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:40.548067   34713 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:40.548091   34713 api_server.go:166] Checking apiserver status ...
	I0827 22:29:40.548118   34713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:40.562940   34713 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:40.572765   34713 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:40.572813   34713 ssh_runner.go:195] Run: ls
	I0827 22:29:40.576715   34713 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:40.582921   34713 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:40.582953   34713 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:40.582964   34713 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:40.582979   34713 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:40.583385   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.583453   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.598582   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43121
	I0827 22:29:40.598953   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.599476   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.599502   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.599860   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.600055   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:40.601660   34713 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:40.601676   34713 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:40.602102   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.602147   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.616805   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33597
	I0827 22:29:40.617204   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.617649   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.617673   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.617947   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.618130   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:40.621013   34713 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:40.621415   34713 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:40.621448   34713 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:40.621575   34713 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:40.621882   34713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:40.621924   34713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:40.636486   34713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0827 22:29:40.636910   34713 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:40.637395   34713 main.go:141] libmachine: Using API Version  1
	I0827 22:29:40.637414   34713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:40.637764   34713 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:40.637940   34713 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:40.638127   34713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:40.638145   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:40.640894   34713 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:40.641260   34713 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:40.641286   34713 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:40.641471   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:40.641631   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:40.641781   34713 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:40.641924   34713 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:40.719351   34713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:40.733786   34713 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 7 (601.080893ms)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-158602-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:48.438697   34866 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:48.438823   34866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:48.438834   34866 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:48.438840   34866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:48.439042   34866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:48.439237   34866 out.go:352] Setting JSON to false
	I0827 22:29:48.439263   34866 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:48.439335   34866 notify.go:220] Checking for updates...
	I0827 22:29:48.439693   34866 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:48.439712   34866 status.go:255] checking status of ha-158602 ...
	I0827 22:29:48.440182   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.440252   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.458629   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0827 22:29:48.459129   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.459748   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.459777   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.460075   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.460265   34866 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:29:48.462632   34866 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:29:48.462654   34866 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:48.463068   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.463108   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.478141   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0827 22:29:48.478586   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.479036   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.479055   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.479369   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.479566   34866 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:29:48.482527   34866 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:48.482992   34866 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:48.483019   34866 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:48.483172   34866 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:29:48.483603   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.483655   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.499800   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I0827 22:29:48.500219   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.500751   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.500774   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.501131   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.501306   34866 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:29:48.501503   34866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:48.501529   34866 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:29:48.504546   34866 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:48.504974   34866 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:29:48.505009   34866 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:29:48.505102   34866 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:29:48.505286   34866 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:29:48.505429   34866 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:29:48.505605   34866 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:29:48.589428   34866 ssh_runner.go:195] Run: systemctl --version
	I0827 22:29:48.595423   34866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:48.609940   34866 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:48.609977   34866 api_server.go:166] Checking apiserver status ...
	I0827 22:29:48.610029   34866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:48.623446   34866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup
	W0827 22:29:48.632360   34866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:48.632415   34866 ssh_runner.go:195] Run: ls
	I0827 22:29:48.636638   34866 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:48.640421   34866 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:48.640439   34866 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:29:48.640448   34866 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:48.640491   34866 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:29:48.640778   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.640809   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.656524   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0827 22:29:48.656916   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.657413   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.657442   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.657823   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.657992   34866 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:29:48.659464   34866 status.go:330] ha-158602-m02 host status = "Stopped" (err=<nil>)
	I0827 22:29:48.659479   34866 status.go:343] host is not running, skipping remaining checks
	I0827 22:29:48.659487   34866 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:48.659520   34866 status.go:255] checking status of ha-158602-m03 ...
	I0827 22:29:48.659797   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.659835   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.675800   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0827 22:29:48.676200   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.676718   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.676737   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.677068   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.677227   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:48.678504   34866 status.go:330] ha-158602-m03 host status = "Running" (err=<nil>)
	I0827 22:29:48.678520   34866 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:48.678802   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.678837   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.693617   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0827 22:29:48.693984   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.694401   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.694426   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.694697   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.694865   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:29:48.697308   34866 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:48.697746   34866 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:48.697792   34866 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:48.697909   34866 host.go:66] Checking if "ha-158602-m03" exists ...
	I0827 22:29:48.698196   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.698229   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.713496   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0827 22:29:48.713858   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.714350   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.714374   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.714634   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.714833   34866 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:48.715030   34866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:48.715053   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:48.717485   34866 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:48.717920   34866 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:48.717954   34866 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:48.718134   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:48.718303   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:48.718431   34866 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:48.718520   34866 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:48.795417   34866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:48.809526   34866 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:29:48.809549   34866 api_server.go:166] Checking apiserver status ...
	I0827 22:29:48.809593   34866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:29:48.823312   34866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0827 22:29:48.832852   34866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:29:48.832917   34866 ssh_runner.go:195] Run: ls
	I0827 22:29:48.836748   34866 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:29:48.841876   34866 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:29:48.841904   34866 status.go:422] ha-158602-m03 apiserver status = Running (err=<nil>)
	I0827 22:29:48.841914   34866 status.go:257] ha-158602-m03 status: &{Name:ha-158602-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:29:48.841932   34866 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:29:48.842330   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.842368   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.857241   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0827 22:29:48.857595   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.858068   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.858091   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.858378   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.858553   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:48.860049   34866 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:29:48.860064   34866 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:48.860438   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.860496   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.875457   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
	I0827 22:29:48.875858   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.876256   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.876274   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.876628   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.876816   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:29:48.879522   34866 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:48.879910   34866 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:48.879939   34866 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:48.880065   34866 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:29:48.880365   34866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:48.880430   34866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:48.895375   34866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0827 22:29:48.895804   34866 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:48.896228   34866 main.go:141] libmachine: Using API Version  1
	I0827 22:29:48.896249   34866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:48.896570   34866 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:48.896880   34866 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:48.897092   34866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:29:48.897111   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:48.900148   34866 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:48.900655   34866 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:48.900679   34866 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:48.900833   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:48.901010   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:48.901152   34866 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:48.901310   34866 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:48.983287   34866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:29:48.997905   34866 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-158602 -n ha-158602
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-158602 logs -n 25: (1.303053373s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m03_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m04 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp testdata/cp-test.txt                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m04_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03:/home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m03 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-158602 node stop m02 -v=7                                                     | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-158602 node start m02 -v=7                                                    | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:22:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:22:05.725091   29384 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:22:05.725198   29384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:22:05.725207   29384 out.go:358] Setting ErrFile to fd 2...
	I0827 22:22:05.725211   29384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:22:05.725395   29384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:22:05.725951   29384 out.go:352] Setting JSON to false
	I0827 22:22:05.726785   29384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3873,"bootTime":1724793453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:22:05.726843   29384 start.go:139] virtualization: kvm guest
	I0827 22:22:05.728938   29384 out.go:177] * [ha-158602] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:22:05.730144   29384 notify.go:220] Checking for updates...
	I0827 22:22:05.730158   29384 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:22:05.731229   29384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:22:05.732370   29384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:05.733494   29384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:05.734563   29384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:22:05.735662   29384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:22:05.736957   29384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:22:05.770377   29384 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 22:22:05.771555   29384 start.go:297] selected driver: kvm2
	I0827 22:22:05.771570   29384 start.go:901] validating driver "kvm2" against <nil>
	I0827 22:22:05.771585   29384 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:22:05.772234   29384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:22:05.772301   29384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:22:05.786773   29384 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:22:05.786811   29384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 22:22:05.787000   29384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:22:05.787063   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:05.787074   29384 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0827 22:22:05.787080   29384 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 22:22:05.787126   29384 start.go:340] cluster config:
	{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:22:05.787229   29384 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:22:05.788962   29384 out.go:177] * Starting "ha-158602" primary control-plane node in "ha-158602" cluster
	I0827 22:22:05.790185   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:05.790216   29384 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:22:05.790227   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:22:05.790298   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:22:05.790308   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:22:05.790581   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:05.790598   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json: {Name:mkfa8fe80ca5d9f0499f17034da7769023bc4dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:05.790717   29384 start.go:360] acquireMachinesLock for ha-158602: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:22:05.790744   29384 start.go:364] duration metric: took 14.385µs to acquireMachinesLock for "ha-158602"
	I0827 22:22:05.790759   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:05.790813   29384 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 22:22:05.792317   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:22:05.792451   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:05.792505   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:05.806240   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0827 22:22:05.806635   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:05.807149   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:05.807199   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:05.807494   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:05.807666   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:05.807803   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:05.807933   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:22:05.807959   29384 client.go:168] LocalClient.Create starting
	I0827 22:22:05.807993   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:22:05.808031   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:05.808049   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:05.808110   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:22:05.808137   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:05.808154   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:05.808177   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:22:05.808195   29384 main.go:141] libmachine: (ha-158602) Calling .PreCreateCheck
	I0827 22:22:05.808508   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:05.808908   29384 main.go:141] libmachine: Creating machine...
	I0827 22:22:05.808923   29384 main.go:141] libmachine: (ha-158602) Calling .Create
	I0827 22:22:05.809055   29384 main.go:141] libmachine: (ha-158602) Creating KVM machine...
	I0827 22:22:05.810075   29384 main.go:141] libmachine: (ha-158602) DBG | found existing default KVM network
	I0827 22:22:05.810681   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:05.810546   29407 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0827 22:22:05.810700   29384 main.go:141] libmachine: (ha-158602) DBG | created network xml: 
	I0827 22:22:05.810711   29384 main.go:141] libmachine: (ha-158602) DBG | <network>
	I0827 22:22:05.810726   29384 main.go:141] libmachine: (ha-158602) DBG |   <name>mk-ha-158602</name>
	I0827 22:22:05.810737   29384 main.go:141] libmachine: (ha-158602) DBG |   <dns enable='no'/>
	I0827 22:22:05.810749   29384 main.go:141] libmachine: (ha-158602) DBG |   
	I0827 22:22:05.810764   29384 main.go:141] libmachine: (ha-158602) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0827 22:22:05.810773   29384 main.go:141] libmachine: (ha-158602) DBG |     <dhcp>
	I0827 22:22:05.810788   29384 main.go:141] libmachine: (ha-158602) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0827 22:22:05.810795   29384 main.go:141] libmachine: (ha-158602) DBG |     </dhcp>
	I0827 22:22:05.810802   29384 main.go:141] libmachine: (ha-158602) DBG |   </ip>
	I0827 22:22:05.810810   29384 main.go:141] libmachine: (ha-158602) DBG |   
	I0827 22:22:05.810818   29384 main.go:141] libmachine: (ha-158602) DBG | </network>
	I0827 22:22:05.810831   29384 main.go:141] libmachine: (ha-158602) DBG | 
	I0827 22:22:05.815706   29384 main.go:141] libmachine: (ha-158602) DBG | trying to create private KVM network mk-ha-158602 192.168.39.0/24...
	I0827 22:22:05.877509   29384 main.go:141] libmachine: (ha-158602) DBG | private KVM network mk-ha-158602 192.168.39.0/24 created
	I0827 22:22:05.877546   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:05.877474   29407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:05.877558   29384 main.go:141] libmachine: (ha-158602) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 ...
	I0827 22:22:05.877582   29384 main.go:141] libmachine: (ha-158602) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:22:05.877629   29384 main.go:141] libmachine: (ha-158602) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:22:06.119558   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.119445   29407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa...
	I0827 22:22:06.271755   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.271633   29407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/ha-158602.rawdisk...
	I0827 22:22:06.271777   29384 main.go:141] libmachine: (ha-158602) DBG | Writing magic tar header
	I0827 22:22:06.271787   29384 main.go:141] libmachine: (ha-158602) DBG | Writing SSH key tar header
	I0827 22:22:06.271795   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:06.271742   29407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 ...
	I0827 22:22:06.271865   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602 (perms=drwx------)
	I0827 22:22:06.271876   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:22:06.271902   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:22:06.271922   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:22:06.271932   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602
	I0827 22:22:06.271940   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:22:06.271949   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:06.271956   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:22:06.271971   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:22:06.271982   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:22:06.271990   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:22:06.271997   29384 main.go:141] libmachine: (ha-158602) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:22:06.272005   29384 main.go:141] libmachine: (ha-158602) DBG | Checking permissions on dir: /home
	I0827 22:22:06.272012   29384 main.go:141] libmachine: (ha-158602) DBG | Skipping /home - not owner
	I0827 22:22:06.272020   29384 main.go:141] libmachine: (ha-158602) Creating domain...
	I0827 22:22:06.273037   29384 main.go:141] libmachine: (ha-158602) define libvirt domain using xml: 
	I0827 22:22:06.273062   29384 main.go:141] libmachine: (ha-158602) <domain type='kvm'>
	I0827 22:22:06.273073   29384 main.go:141] libmachine: (ha-158602)   <name>ha-158602</name>
	I0827 22:22:06.273085   29384 main.go:141] libmachine: (ha-158602)   <memory unit='MiB'>2200</memory>
	I0827 22:22:06.273105   29384 main.go:141] libmachine: (ha-158602)   <vcpu>2</vcpu>
	I0827 22:22:06.273123   29384 main.go:141] libmachine: (ha-158602)   <features>
	I0827 22:22:06.273130   29384 main.go:141] libmachine: (ha-158602)     <acpi/>
	I0827 22:22:06.273137   29384 main.go:141] libmachine: (ha-158602)     <apic/>
	I0827 22:22:06.273145   29384 main.go:141] libmachine: (ha-158602)     <pae/>
	I0827 22:22:06.273158   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273168   29384 main.go:141] libmachine: (ha-158602)   </features>
	I0827 22:22:06.273176   29384 main.go:141] libmachine: (ha-158602)   <cpu mode='host-passthrough'>
	I0827 22:22:06.273189   29384 main.go:141] libmachine: (ha-158602)   
	I0827 22:22:06.273196   29384 main.go:141] libmachine: (ha-158602)   </cpu>
	I0827 22:22:06.273250   29384 main.go:141] libmachine: (ha-158602)   <os>
	I0827 22:22:06.273273   29384 main.go:141] libmachine: (ha-158602)     <type>hvm</type>
	I0827 22:22:06.273283   29384 main.go:141] libmachine: (ha-158602)     <boot dev='cdrom'/>
	I0827 22:22:06.273295   29384 main.go:141] libmachine: (ha-158602)     <boot dev='hd'/>
	I0827 22:22:06.273306   29384 main.go:141] libmachine: (ha-158602)     <bootmenu enable='no'/>
	I0827 22:22:06.273315   29384 main.go:141] libmachine: (ha-158602)   </os>
	I0827 22:22:06.273323   29384 main.go:141] libmachine: (ha-158602)   <devices>
	I0827 22:22:06.273331   29384 main.go:141] libmachine: (ha-158602)     <disk type='file' device='cdrom'>
	I0827 22:22:06.273341   29384 main.go:141] libmachine: (ha-158602)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/boot2docker.iso'/>
	I0827 22:22:06.273356   29384 main.go:141] libmachine: (ha-158602)       <target dev='hdc' bus='scsi'/>
	I0827 22:22:06.273389   29384 main.go:141] libmachine: (ha-158602)       <readonly/>
	I0827 22:22:06.273408   29384 main.go:141] libmachine: (ha-158602)     </disk>
	I0827 22:22:06.273422   29384 main.go:141] libmachine: (ha-158602)     <disk type='file' device='disk'>
	I0827 22:22:06.273435   29384 main.go:141] libmachine: (ha-158602)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:22:06.273452   29384 main.go:141] libmachine: (ha-158602)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/ha-158602.rawdisk'/>
	I0827 22:22:06.273464   29384 main.go:141] libmachine: (ha-158602)       <target dev='hda' bus='virtio'/>
	I0827 22:22:06.273474   29384 main.go:141] libmachine: (ha-158602)     </disk>
	I0827 22:22:06.273485   29384 main.go:141] libmachine: (ha-158602)     <interface type='network'>
	I0827 22:22:06.273497   29384 main.go:141] libmachine: (ha-158602)       <source network='mk-ha-158602'/>
	I0827 22:22:06.273510   29384 main.go:141] libmachine: (ha-158602)       <model type='virtio'/>
	I0827 22:22:06.273521   29384 main.go:141] libmachine: (ha-158602)     </interface>
	I0827 22:22:06.273533   29384 main.go:141] libmachine: (ha-158602)     <interface type='network'>
	I0827 22:22:06.273542   29384 main.go:141] libmachine: (ha-158602)       <source network='default'/>
	I0827 22:22:06.273554   29384 main.go:141] libmachine: (ha-158602)       <model type='virtio'/>
	I0827 22:22:06.273576   29384 main.go:141] libmachine: (ha-158602)     </interface>
	I0827 22:22:06.273592   29384 main.go:141] libmachine: (ha-158602)     <serial type='pty'>
	I0827 22:22:06.273602   29384 main.go:141] libmachine: (ha-158602)       <target port='0'/>
	I0827 22:22:06.273608   29384 main.go:141] libmachine: (ha-158602)     </serial>
	I0827 22:22:06.273616   29384 main.go:141] libmachine: (ha-158602)     <console type='pty'>
	I0827 22:22:06.273629   29384 main.go:141] libmachine: (ha-158602)       <target type='serial' port='0'/>
	I0827 22:22:06.273640   29384 main.go:141] libmachine: (ha-158602)     </console>
	I0827 22:22:06.273653   29384 main.go:141] libmachine: (ha-158602)     <rng model='virtio'>
	I0827 22:22:06.273665   29384 main.go:141] libmachine: (ha-158602)       <backend model='random'>/dev/random</backend>
	I0827 22:22:06.273682   29384 main.go:141] libmachine: (ha-158602)     </rng>
	I0827 22:22:06.273711   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273723   29384 main.go:141] libmachine: (ha-158602)     
	I0827 22:22:06.273730   29384 main.go:141] libmachine: (ha-158602)   </devices>
	I0827 22:22:06.273740   29384 main.go:141] libmachine: (ha-158602) </domain>
	I0827 22:22:06.273748   29384 main.go:141] libmachine: (ha-158602) 
	I0827 22:22:06.277981   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:88:a1:82 in network default
	I0827 22:22:06.278502   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:06.278514   29384 main.go:141] libmachine: (ha-158602) Ensuring networks are active...
	I0827 22:22:06.279216   29384 main.go:141] libmachine: (ha-158602) Ensuring network default is active
	I0827 22:22:06.279594   29384 main.go:141] libmachine: (ha-158602) Ensuring network mk-ha-158602 is active
	I0827 22:22:06.280161   29384 main.go:141] libmachine: (ha-158602) Getting domain xml...
	I0827 22:22:06.280932   29384 main.go:141] libmachine: (ha-158602) Creating domain...
	I0827 22:22:07.467089   29384 main.go:141] libmachine: (ha-158602) Waiting to get IP...
	I0827 22:22:07.467844   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.468192   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.468236   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.468189   29407 retry.go:31] will retry after 194.265732ms: waiting for machine to come up
	I0827 22:22:07.663504   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.663919   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.663939   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.663867   29407 retry.go:31] will retry after 270.765071ms: waiting for machine to come up
	I0827 22:22:07.937608   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:07.938086   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:07.938109   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:07.938043   29407 retry.go:31] will retry after 339.340195ms: waiting for machine to come up
	I0827 22:22:08.278496   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:08.278863   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:08.278880   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:08.278827   29407 retry.go:31] will retry after 514.863902ms: waiting for machine to come up
	I0827 22:22:08.795484   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:08.795916   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:08.795944   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:08.795873   29407 retry.go:31] will retry after 630.596256ms: waiting for machine to come up
	I0827 22:22:09.427625   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:09.428002   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:09.428027   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:09.427950   29407 retry.go:31] will retry after 906.309617ms: waiting for machine to come up
	I0827 22:22:10.336015   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:10.336420   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:10.336513   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:10.336396   29407 retry.go:31] will retry after 810.130306ms: waiting for machine to come up
	I0827 22:22:11.147751   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:11.148358   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:11.148404   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:11.148325   29407 retry.go:31] will retry after 1.037475417s: waiting for machine to come up
	I0827 22:22:12.187573   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:12.188125   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:12.188164   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:12.187954   29407 retry.go:31] will retry after 1.741861845s: waiting for machine to come up
	I0827 22:22:13.931937   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:13.932385   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:13.932415   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:13.932334   29407 retry.go:31] will retry after 2.17941581s: waiting for machine to come up
	I0827 22:22:16.113939   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:16.114420   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:16.114449   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:16.114352   29407 retry.go:31] will retry after 2.318053422s: waiting for machine to come up
	I0827 22:22:18.435855   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:18.436172   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:18.436193   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:18.436133   29407 retry.go:31] will retry after 2.715139833s: waiting for machine to come up
	I0827 22:22:21.152530   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:21.152930   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:21.152959   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:21.152883   29407 retry.go:31] will retry after 3.047166733s: waiting for machine to come up
	I0827 22:22:24.203998   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:24.204352   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find current IP address of domain ha-158602 in network mk-ha-158602
	I0827 22:22:24.204375   29384 main.go:141] libmachine: (ha-158602) DBG | I0827 22:22:24.204336   29407 retry.go:31] will retry after 4.148204433s: waiting for machine to come up
	I0827 22:22:28.355563   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.355978   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has current primary IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.356003   29384 main.go:141] libmachine: (ha-158602) Found IP for machine: 192.168.39.77
	I0827 22:22:28.356016   29384 main.go:141] libmachine: (ha-158602) Reserving static IP address...
	I0827 22:22:28.356292   29384 main.go:141] libmachine: (ha-158602) DBG | unable to find host DHCP lease matching {name: "ha-158602", mac: "52:54:00:25:de:6a", ip: "192.168.39.77"} in network mk-ha-158602
	I0827 22:22:28.428664   29384 main.go:141] libmachine: (ha-158602) Reserved static IP address: 192.168.39.77
	I0827 22:22:28.428689   29384 main.go:141] libmachine: (ha-158602) Waiting for SSH to be available...
	I0827 22:22:28.428699   29384 main.go:141] libmachine: (ha-158602) DBG | Getting to WaitForSSH function...
	I0827 22:22:28.431057   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.431485   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.431516   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.431603   29384 main.go:141] libmachine: (ha-158602) DBG | Using SSH client type: external
	I0827 22:22:28.431638   29384 main.go:141] libmachine: (ha-158602) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa (-rw-------)
	I0827 22:22:28.431679   29384 main.go:141] libmachine: (ha-158602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:22:28.431696   29384 main.go:141] libmachine: (ha-158602) DBG | About to run SSH command:
	I0827 22:22:28.431712   29384 main.go:141] libmachine: (ha-158602) DBG | exit 0
	I0827 22:22:28.560450   29384 main.go:141] libmachine: (ha-158602) DBG | SSH cmd err, output: <nil>: 
	I0827 22:22:28.560805   29384 main.go:141] libmachine: (ha-158602) KVM machine creation complete!
	I0827 22:22:28.561127   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:28.561629   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:28.561874   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:28.562017   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:22:28.562034   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:28.563480   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:22:28.563494   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:22:28.563500   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:22:28.563506   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.565826   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.566247   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.566267   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.566440   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.566680   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.566852   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.567031   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.567196   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.567381   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.567394   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:22:28.675712   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:22:28.675738   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:22:28.675749   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.678641   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.679008   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.679039   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.679207   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.679414   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.679587   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.679800   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.679980   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.680216   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.680232   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:22:28.792985   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:22:28.793063   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:22:28.793076   29384 main.go:141] libmachine: Provisioning with buildroot...
	I0827 22:22:28.793084   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:28.793353   29384 buildroot.go:166] provisioning hostname "ha-158602"
	I0827 22:22:28.793377   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:28.793549   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.796260   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.796636   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.796663   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.796788   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.796977   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.797137   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.797243   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.797430   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.797634   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.797653   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602 && echo "ha-158602" | sudo tee /etc/hostname
	I0827 22:22:28.923109   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:22:28.923134   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:28.926153   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.926503   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:28.926530   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:28.926699   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:28.926955   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.927131   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:28.927366   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:28.927515   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:28.927700   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:28.927716   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:22:29.048227   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:22:29.048253   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:22:29.048285   29384 buildroot.go:174] setting up certificates
	I0827 22:22:29.048294   29384 provision.go:84] configureAuth start
	I0827 22:22:29.048302   29384 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:22:29.048596   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.051241   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.051578   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.051603   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.051768   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.054036   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.054563   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.054599   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.054709   29384 provision.go:143] copyHostCerts
	I0827 22:22:29.054733   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:22:29.054764   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:22:29.054780   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:22:29.054850   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:22:29.054937   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:22:29.054960   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:22:29.054970   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:22:29.054995   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:22:29.055073   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:22:29.055105   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:22:29.055115   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:22:29.055152   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:22:29.055222   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602 san=[127.0.0.1 192.168.39.77 ha-158602 localhost minikube]
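
The server certificate generated here carries the SAN list shown above (127.0.0.1, 192.168.39.77, ha-158602, localhost, minikube). Once copyRemoteCerts below has placed it at /etc/docker/server.pem on the guest, the SANs can be inspected with openssl; a small sketch, assuming SSH access to the guest:

    # Inspect the SANs baked into the provisioned server certificate
    # (remote path taken from the copyRemoteCerts step below).
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
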
	I0827 22:22:29.154469   29384 provision.go:177] copyRemoteCerts
	I0827 22:22:29.154522   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:22:29.154543   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.157356   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.157674   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.157696   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.157930   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.158112   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.158233   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.158366   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.242578   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:22:29.242638   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:22:29.265734   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:22:29.265816   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0827 22:22:29.288756   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:22:29.288828   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 22:22:29.311517   29384 provision.go:87] duration metric: took 263.210733ms to configureAuth
	I0827 22:22:29.311550   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:22:29.311770   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:29.311849   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.314644   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.314971   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.314997   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.315171   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.315372   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.315507   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.315617   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.315781   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:29.315958   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:29.315979   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:22:29.546170   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
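
The command above writes a one-line environment file and restarts CRI-O so that the cluster service CIDR 10.96.0.0/12 is treated as an insecure registry. A quick check of the result on the guest (file path as in the command above):

    # Verify the environment file written by the SSH command above and that CRI-O came back up.
    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio     # should report "active" after the restart
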
	
	I0827 22:22:29.546198   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:22:29.546209   29384 main.go:141] libmachine: (ha-158602) Calling .GetURL
	I0827 22:22:29.547671   29384 main.go:141] libmachine: (ha-158602) DBG | Using libvirt version 6000000
	I0827 22:22:29.549750   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.550027   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.550056   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.550171   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:22:29.550182   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:22:29.550197   29384 client.go:171] duration metric: took 23.742230676s to LocalClient.Create
	I0827 22:22:29.550222   29384 start.go:167] duration metric: took 23.742288109s to libmachine.API.Create "ha-158602"
	I0827 22:22:29.550231   29384 start.go:293] postStartSetup for "ha-158602" (driver="kvm2")
	I0827 22:22:29.550244   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:22:29.550264   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.550577   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:22:29.550600   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.552753   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.553090   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.553118   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.553195   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.553448   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.553620   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.553773   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.638774   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:22:29.642778   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:22:29.642806   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:22:29.642897   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:22:29.643004   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:22:29.643017   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:22:29.643159   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:22:29.652334   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:22:29.675090   29384 start.go:296] duration metric: took 124.845065ms for postStartSetup
	I0827 22:22:29.675136   29384 main.go:141] libmachine: (ha-158602) Calling .GetConfigRaw
	I0827 22:22:29.675736   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.678241   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.678633   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.678660   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.678878   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:29.679066   29384 start.go:128] duration metric: took 23.888243916s to createHost
	I0827 22:22:29.679089   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.681377   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.681691   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.681716   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.681802   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.681977   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.682107   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.682257   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.682399   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:22:29.682549   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:22:29.682569   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:22:29.792862   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797349.771722306
	
	I0827 22:22:29.792895   29384 fix.go:216] guest clock: 1724797349.771722306
	I0827 22:22:29.792908   29384 fix.go:229] Guest: 2024-08-27 22:22:29.771722306 +0000 UTC Remote: 2024-08-27 22:22:29.679078204 +0000 UTC m=+23.987252558 (delta=92.644102ms)
	I0827 22:22:29.792938   29384 fix.go:200] guest clock delta is within tolerance: 92.644102ms
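
The guest clock check runs date +%s.%N inside the VM and compares it with the host-side timestamp: epoch 1724797349.771722306 is 22:22:29.771722306 UTC, and subtracting the host reading 22:22:29.679078204 gives the logged delta of roughly 92.644 ms, well within tolerance. A sketch of the same comparison done by hand, assuming the SSH key path and user from this log and that bc is installed on the host:

    # Re-run the guest clock comparison manually; key path, user and IP are taken from the log.
    KEY=/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa
    GUEST=$(ssh -i "$KEY" docker@192.168.39.77 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "guest-host delta: $(echo "$GUEST - $HOST" | bc) s"
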
	I0827 22:22:29.792947   29384 start.go:83] releasing machines lock for "ha-158602", held for 24.00219403s
	I0827 22:22:29.792977   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.793232   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:29.795836   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.796182   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.796208   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.796387   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.796865   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.797060   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:29.797174   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:22:29.797220   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.797265   29384 ssh_runner.go:195] Run: cat /version.json
	I0827 22:22:29.797281   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:29.799931   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.799949   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800228   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.800285   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:29.800307   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800330   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:29.800475   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.800620   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:29.800694   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.800786   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:29.800855   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.800912   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:29.800966   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.801020   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:29.913602   29384 ssh_runner.go:195] Run: systemctl --version
	I0827 22:22:29.919590   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:22:30.074495   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:22:30.079886   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:22:30.079939   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:22:30.094396   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:22:30.094422   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:22:30.094496   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:22:30.109029   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:22:30.122546   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:22:30.122642   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:22:30.136969   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:22:30.150178   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:22:30.259147   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:22:30.423027   29384 docker.go:233] disabling docker service ...
	I0827 22:22:30.423085   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:22:30.436430   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:22:30.448753   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:22:30.577789   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:22:30.700754   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:22:30.713801   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:22:30.732850   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:22:30.732912   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.744177   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:22:30.744243   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.755285   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.766141   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.777285   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:22:30.788436   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.799321   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.816622   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:22:30.827742   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:22:30.837836   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:22:30.837887   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:22:30.851354   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:22:30.861051   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:22:30.984839   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
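
The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup and open unprivileged ports via default_sysctls before CRI-O is restarted. A hedged spot-check of the resulting files (paths as in the commands above):

    # Spot-check the crictl config and the CRI-O drop-in after the edits and restart above.
    cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
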
	I0827 22:22:31.074940   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:22:31.075021   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:22:31.079425   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:22:31.079483   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:22:31.083100   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:22:31.120413   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:22:31.120509   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:22:31.148060   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:22:31.175246   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:22:31.176367   29384 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:22:31.178721   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:31.179040   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:31.179066   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:31.179249   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:22:31.182922   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:22:31.195106   29384 kubeadm.go:883] updating cluster {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:22:31.195214   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:31.195263   29384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:22:31.226670   29384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0827 22:22:31.226758   29384 ssh_runner.go:195] Run: which lz4
	I0827 22:22:31.230524   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0827 22:22:31.230640   29384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 22:22:31.234392   29384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 22:22:31.234422   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0827 22:22:32.368661   29384 crio.go:462] duration metric: took 1.13806452s to copy over tarball
	I0827 22:22:32.368736   29384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 22:22:34.354238   29384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.985475866s)
	I0827 22:22:34.354264   29384 crio.go:469] duration metric: took 1.985575846s to extract the tarball
	I0827 22:22:34.354270   29384 ssh_runner.go:146] rm: /preloaded.tar.lz4
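
Because no preloaded images were found, the ~389 MB preload tarball is copied to /preloaded.tar.lz4 on the guest, unpacked into /var with extended attributes preserved, and then removed. The guest-side sequence, condensed from the surrounding log lines:

    # Condensed from the log: unpack the scp'd preload tarball into /var, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | grep kube-apiserver   # preloaded images should now be listed
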
	I0827 22:22:34.390079   29384 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:22:34.433362   29384 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:22:34.433387   29384 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:22:34.433397   29384 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.0 crio true true} ...
	I0827 22:22:34.433533   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
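
The kubelet is configured through a systemd drop-in whose ExecStart pins --hostname-override and --node-ip for this node and points at the bootstrap and final kubeconfigs. A small sketch for checking the effective unit once the drop-in (10-kubeadm.conf, written a few lines further down) is in place:

    # Show the kubelet unit plus the minikube drop-in written later in this log.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager   # should contain --node-ip=192.168.39.77
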
	I0827 22:22:34.433623   29384 ssh_runner.go:195] Run: crio config
	I0827 22:22:34.477896   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:34.477915   29384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 22:22:34.477924   29384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:22:34.477943   29384 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-158602 NodeName:ha-158602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:22:34.478089   29384 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-158602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
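
The generated kubeadm config is a four-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Once it lands at /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down), the document kinds can be sanity-checked with a one-liner; a minimal sketch:

    # List the document kinds in the generated kubeadm config (path taken from later log lines).
    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration
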
	
	I0827 22:22:34.478113   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:22:34.478157   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:22:34.493162   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:22:34.493306   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
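
The static pod above runs kube-vip in ARP mode with control-plane load balancing enabled, advertising the HA VIP 192.168.39.254 on eth0 and electing a leader through the plndr-cp-lock lease. A hedged sketch for checking the VIP once kubelet has picked up the manifest; the ip/curl checks rely on kube-vip's documented ARP behavior rather than anything in this log:

    # After kubelet loads /etc/kubernetes/manifests/kube-vip.yaml (copied below), the VIP
    # should be bound to eth0 on the leader and the API server reachable through it.
    ip addr show eth0 | grep 192.168.39.254
    curl -k https://192.168.39.254:8443/healthz    # may still require auth depending on RBAC
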
	I0827 22:22:34.493383   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:22:34.503023   29384 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:22:34.503082   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0827 22:22:34.512199   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0827 22:22:34.527196   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:22:34.541542   29384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0827 22:22:34.556176   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0827 22:22:34.573020   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:22:34.576515   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:22:34.587435   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:22:34.701038   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:22:34.716711   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.77
	I0827 22:22:34.716737   29384 certs.go:194] generating shared ca certs ...
	I0827 22:22:34.716757   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.716937   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:22:34.716984   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:22:34.716997   29384 certs.go:256] generating profile certs ...
	I0827 22:22:34.717046   29384 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:22:34.717072   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt with IP's: []
	I0827 22:22:34.818879   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt ...
	I0827 22:22:34.818905   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt: {Name:mkdf45df5f65fbc406507ea6a9494233f6ccc139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.819088   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key ...
	I0827 22:22:34.819101   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key: {Name:mka5ce0f67af3ce4732ca247b43e3fa8d39f7d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.819193   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a
	I0827 22:22:34.819217   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.254]
	I0827 22:22:34.864751   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a ...
	I0827 22:22:34.864777   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a: {Name:mkf15c8892d9da701cae3227207b1e68ca1f0830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.864921   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a ...
	I0827 22:22:34.864933   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a: {Name:mkaadc67dd86d52629334b484281a2a6fe7c5760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.865003   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.32092e0a -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:22:34.865081   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.32092e0a -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:22:34.865134   29384 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:22:34.865149   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt with IP's: []
	I0827 22:22:34.922123   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt ...
	I0827 22:22:34.922151   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt: {Name:mk7b73460f10a4c6e6831b9d583235ac67597a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.922296   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key ...
	I0827 22:22:34.922306   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key: {Name:mk0221e48cfc3cc05f388732951062f16a100d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:34.922377   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:22:34.922393   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:22:34.922403   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:22:34.922416   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:22:34.922426   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:22:34.922440   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:22:34.922453   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:22:34.922466   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:22:34.922508   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:22:34.922539   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:22:34.922547   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:22:34.922569   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:22:34.922600   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:22:34.922622   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:22:34.922658   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:22:34.922679   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:22:34.922689   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:22:34.922699   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:34.923196   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:22:34.947282   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:22:34.969654   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:22:34.991949   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:22:35.014109   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0827 22:22:35.036792   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:22:35.058733   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:22:35.080663   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:22:35.102279   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:22:35.124796   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:22:35.145735   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:22:35.166753   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:22:35.182273   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:22:35.187454   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:22:35.196900   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.200799   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.200842   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:22:35.206018   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:22:35.215519   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:22:35.225022   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.229043   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.229097   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:22:35.234398   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 22:22:35.243911   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:22:35.253612   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.257500   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.257543   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:22:35.262614   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
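
Each CA bundle copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above). The link name comes straight from openssl x509 -hash; a small sketch using the minikubeCA bundle:

    # The symlink name is the certificate's subject hash plus a ".0" suffix, as in the log above.
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${H}.0"     # expected to point back at minikubeCA.pem
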
	I0827 22:22:35.272027   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:22:35.275533   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:22:35.275603   29384 kubeadm.go:392] StartCluster: {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:22:35.275674   29384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:22:35.275735   29384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:22:35.309881   29384 cri.go:89] found id: ""
	I0827 22:22:35.309954   29384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 22:22:35.318972   29384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 22:22:35.327616   29384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 22:22:35.336267   29384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 22:22:35.336288   29384 kubeadm.go:157] found existing configuration files:
	
	I0827 22:22:35.336328   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 22:22:35.344725   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 22:22:35.344785   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 22:22:35.353189   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 22:22:35.361747   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 22:22:35.361797   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 22:22:35.370528   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 22:22:35.379013   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 22:22:35.379059   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 22:22:35.388154   29384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 22:22:35.396693   29384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 22:22:35.396747   29384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
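	The stale-config pass above boils down to: for each kubeconfig under /etc/kubernetes, check whether it already references the expected control-plane endpoint and delete it if it does not, so kubeadm init can regenerate it. Below is a minimal Go sketch of that pattern; the paths and endpoint are the ones shown in the log, but the helper itself is illustrative and is not minikube's kubeadm.go code (which runs the equivalent grep/rm over SSH).

		package main

		import (
			"bytes"
			"fmt"
			"os"
		)

		// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
		// expected control-plane endpoint; missing files are skipped, matching the
		// "No such file or directory" branches in the log above.
		func cleanStaleKubeconfigs(endpoint string, paths []string) error {
			for _, p := range paths {
				data, err := os.ReadFile(p)
				if err != nil {
					continue // file absent: nothing to clean up
				}
				if !bytes.Contains(data, []byte(endpoint)) {
					if err := os.Remove(p); err != nil {
						return fmt.Errorf("removing stale %s: %w", p, err)
					}
				}
			}
			return nil
		}

		func main() {
			paths := []string{
				"/etc/kubernetes/admin.conf",
				"/etc/kubernetes/kubelet.conf",
				"/etc/kubernetes/controller-manager.conf",
				"/etc/kubernetes/scheduler.conf",
			}
			if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}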
	I0827 22:22:35.405660   29384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 22:22:35.510317   29384 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0827 22:22:35.510446   29384 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 22:22:35.625850   29384 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 22:22:35.626003   29384 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 22:22:35.626109   29384 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 22:22:35.636040   29384 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 22:22:35.638828   29384 out.go:235]   - Generating certificates and keys ...
	I0827 22:22:35.638931   29384 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 22:22:35.639011   29384 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 22:22:35.765494   29384 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 22:22:35.847870   29384 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 22:22:35.951048   29384 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 22:22:36.106009   29384 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 22:22:36.255065   29384 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 22:22:36.255236   29384 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-158602 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0827 22:22:36.328842   29384 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 22:22:36.329019   29384 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-158602 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0827 22:22:36.391948   29384 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 22:22:36.486461   29384 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 22:22:36.622616   29384 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 22:22:36.622853   29384 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 22:22:37.182141   29384 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 22:22:37.329148   29384 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0827 22:22:37.487447   29384 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 22:22:37.611584   29384 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 22:22:37.725021   29384 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 22:22:37.725712   29384 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 22:22:37.728853   29384 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 22:22:37.838718   29384 out.go:235]   - Booting up control plane ...
	I0827 22:22:37.838841   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 22:22:37.838942   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 22:22:37.839019   29384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 22:22:37.839141   29384 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 22:22:37.839260   29384 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 22:22:37.839324   29384 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 22:22:37.889251   29384 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0827 22:22:37.889444   29384 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0827 22:22:38.390792   29384 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.677224ms
	I0827 22:22:38.390907   29384 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0827 22:22:44.335789   29384 kubeadm.go:310] [api-check] The API server is healthy after 5.948175854s
	I0827 22:22:44.351540   29384 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 22:22:44.369518   29384 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 22:22:44.904393   29384 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 22:22:44.904686   29384 kubeadm.go:310] [mark-control-plane] Marking the node ha-158602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 22:22:44.917380   29384 kubeadm.go:310] [bootstrap-token] Using token: 1ncx0g.2a6qvzpriwfvodsr
	I0827 22:22:44.918757   29384 out.go:235]   - Configuring RBAC rules ...
	I0827 22:22:44.918916   29384 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 22:22:44.928255   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 22:22:44.939201   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 22:22:44.942667   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 22:22:44.946407   29384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 22:22:44.950174   29384 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 22:22:44.965897   29384 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 22:22:45.211092   29384 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 22:22:45.742594   29384 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 22:22:45.744221   29384 kubeadm.go:310] 
	I0827 22:22:45.744283   29384 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 22:22:45.744291   29384 kubeadm.go:310] 
	I0827 22:22:45.744415   29384 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 22:22:45.744435   29384 kubeadm.go:310] 
	I0827 22:22:45.744478   29384 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 22:22:45.744555   29384 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 22:22:45.744621   29384 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 22:22:45.744634   29384 kubeadm.go:310] 
	I0827 22:22:45.744710   29384 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 22:22:45.744726   29384 kubeadm.go:310] 
	I0827 22:22:45.744797   29384 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 22:22:45.744811   29384 kubeadm.go:310] 
	I0827 22:22:45.744892   29384 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 22:22:45.744987   29384 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 22:22:45.745081   29384 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 22:22:45.745093   29384 kubeadm.go:310] 
	I0827 22:22:45.745207   29384 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 22:22:45.745315   29384 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 22:22:45.745327   29384 kubeadm.go:310] 
	I0827 22:22:45.745437   29384 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1ncx0g.2a6qvzpriwfvodsr \
	I0827 22:22:45.745566   29384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 \
	I0827 22:22:45.745598   29384 kubeadm.go:310] 	--control-plane 
	I0827 22:22:45.745605   29384 kubeadm.go:310] 
	I0827 22:22:45.745692   29384 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 22:22:45.745698   29384 kubeadm.go:310] 
	I0827 22:22:45.745783   29384 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1ncx0g.2a6qvzpriwfvodsr \
	I0827 22:22:45.745915   29384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 
	I0827 22:22:45.747849   29384 kubeadm.go:310] W0827 22:22:35.491816     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:22:45.748216   29384 kubeadm.go:310] W0827 22:22:35.492666     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:22:45.748351   29384 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 22:22:45.748379   29384 cni.go:84] Creating CNI manager for ""
	I0827 22:22:45.748389   29384 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0827 22:22:45.749954   29384 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0827 22:22:45.751246   29384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0827 22:22:45.756717   29384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0827 22:22:45.756735   29384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0827 22:22:45.775957   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0827 22:22:46.130373   29384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 22:22:46.130446   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:46.130474   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602 minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=true
	I0827 22:22:46.326174   29384 ops.go:34] apiserver oom_adj: -16
	I0827 22:22:46.326200   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:46.826245   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:47.326346   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:47.826659   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:48.327250   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:48.826302   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.327122   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.826329   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:22:49.915000   29384 kubeadm.go:1113] duration metric: took 3.78461182s to wait for elevateKubeSystemPrivileges
	I0827 22:22:49.915028   29384 kubeadm.go:394] duration metric: took 14.63943765s to StartCluster
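	The repeated `kubectl get sa default` runs above are a poll (roughly every 500ms) that ends once the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal Go sketch of that wait pattern follows, assuming kubectl is on PATH; the command and kubeconfig path come from the log, while the wrapper and its two-minute deadline are illustrative rather than minikube's implementation.

		package main

		import (
			"context"
			"fmt"
			"os/exec"
			"time"
		)

		// waitForDefaultSA re-runs `kubectl get sa default` every 500ms until it
		// succeeds (the ServiceAccount exists) or the context deadline expires.
		func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
			ticker := time.NewTicker(500 * time.Millisecond)
			defer ticker.Stop()
			for {
				cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
				if err := cmd.Run(); err == nil {
					return nil
				}
				select {
				case <-ctx.Done():
					return fmt.Errorf("default ServiceAccount never appeared: %w", ctx.Err())
				case <-ticker.C:
				}
			}
		}

		func main() {
			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
			defer cancel()
			if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
				fmt.Println(err)
			}
		}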
	I0827 22:22:49.915050   29384 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:49.915134   29384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:49.915793   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:22:49.916017   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0827 22:22:49.916028   29384 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 22:22:49.916087   29384 addons.go:69] Setting storage-provisioner=true in profile "ha-158602"
	I0827 22:22:49.916013   29384 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:49.916156   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:22:49.916119   29384 addons.go:69] Setting default-storageclass=true in profile "ha-158602"
	I0827 22:22:49.916211   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:49.916222   29384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-158602"
	I0827 22:22:49.916121   29384 addons.go:234] Setting addon storage-provisioner=true in "ha-158602"
	I0827 22:22:49.916289   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:22:49.916741   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.916778   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.916797   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.916828   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.931837   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0827 22:22:49.931983   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0827 22:22:49.932314   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.932433   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.932860   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.932885   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.932986   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.933006   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.933226   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.933334   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.933499   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.933794   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.933826   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.935547   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:22:49.935885   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 22:22:49.936488   29384 cert_rotation.go:140] Starting client certificate rotation controller
	I0827 22:22:49.936762   29384 addons.go:234] Setting addon default-storageclass=true in "ha-158602"
	I0827 22:22:49.936805   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:22:49.937200   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.937245   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.949831   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0827 22:22:49.950337   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.950827   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.950854   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.951203   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.951420   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.952187   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0827 22:22:49.952660   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.953109   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.953133   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.953268   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:49.953442   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.953888   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:49.953927   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:49.955405   29384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 22:22:49.956859   29384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:22:49.956875   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 22:22:49.956893   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:49.959686   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.960119   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:49.960145   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.960404   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:49.960585   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:49.960737   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:49.960904   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:49.974233   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0827 22:22:49.974626   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:49.975193   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:49.975221   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:49.975534   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:49.975748   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:22:49.977287   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:22:49.977497   29384 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 22:22:49.977513   29384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 22:22:49.977528   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:22:49.980128   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.980544   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:22:49.980571   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:22:49.980744   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:22:49.980922   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:22:49.981062   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:22:49.981196   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:22:50.062971   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0827 22:22:50.169857   29384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 22:22:50.183206   29384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:22:50.605591   29384 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0827 22:22:50.605615   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.605635   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.605930   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.605958   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.605970   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.605985   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.605996   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.606208   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.606220   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.606235   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.606277   29384 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 22:22:50.606293   29384 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 22:22:50.606391   29384 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0827 22:22:50.606398   29384 round_trippers.go:469] Request Headers:
	I0827 22:22:50.606406   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:22:50.606409   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:22:50.619083   29384 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0827 22:22:50.619591   29384 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0827 22:22:50.619606   29384 round_trippers.go:469] Request Headers:
	I0827 22:22:50.619625   29384 round_trippers.go:473]     Content-Type: application/json
	I0827 22:22:50.619629   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:22:50.619633   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:22:50.623083   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:22:50.623255   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.623268   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.623535   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.623553   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.623587   29384 main.go:141] libmachine: (ha-158602) DBG | Closing plugin on server side
	I0827 22:22:50.977205   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.977233   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.977539   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.977559   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.977569   29384 main.go:141] libmachine: Making call to close driver server
	I0827 22:22:50.977579   29384 main.go:141] libmachine: (ha-158602) Calling .Close
	I0827 22:22:50.977813   29384 main.go:141] libmachine: Successfully made call to close driver server
	I0827 22:22:50.977826   29384 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 22:22:50.979490   29384 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0827 22:22:50.980882   29384 addons.go:510] duration metric: took 1.064849742s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0827 22:22:50.980912   29384 start.go:246] waiting for cluster config update ...
	I0827 22:22:50.980923   29384 start.go:255] writing updated cluster config ...
	I0827 22:22:50.982330   29384 out.go:201] 
	I0827 22:22:50.983724   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:22:50.983785   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:50.985492   29384 out.go:177] * Starting "ha-158602-m02" control-plane node in "ha-158602" cluster
	I0827 22:22:50.986474   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:22:50.986494   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:22:50.986581   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:22:50.986596   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:22:50.986663   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:22:50.986847   29384 start.go:360] acquireMachinesLock for ha-158602-m02: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:22:50.986893   29384 start.go:364] duration metric: took 25.735µs to acquireMachinesLock for "ha-158602-m02"
	I0827 22:22:50.986915   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:22:50.987012   29384 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0827 22:22:50.988953   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:22:50.989044   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:22:50.989075   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:22:51.003802   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0827 22:22:51.004211   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:22:51.004688   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:22:51.004709   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:22:51.004999   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:22:51.005166   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:22:51.005287   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:22:51.005453   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:22:51.005473   29384 client.go:168] LocalClient.Create starting
	I0827 22:22:51.005506   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:22:51.005543   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:51.005571   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:51.005642   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:22:51.005672   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:22:51.005689   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:22:51.005714   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:22:51.005727   29384 main.go:141] libmachine: (ha-158602-m02) Calling .PreCreateCheck
	I0827 22:22:51.005880   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:22:51.006209   29384 main.go:141] libmachine: Creating machine...
	I0827 22:22:51.006237   29384 main.go:141] libmachine: (ha-158602-m02) Calling .Create
	I0827 22:22:51.006350   29384 main.go:141] libmachine: (ha-158602-m02) Creating KVM machine...
	I0827 22:22:51.007588   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found existing default KVM network
	I0827 22:22:51.007721   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found existing private KVM network mk-ha-158602
	I0827 22:22:51.007864   29384 main.go:141] libmachine: (ha-158602-m02) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 ...
	I0827 22:22:51.007895   29384 main.go:141] libmachine: (ha-158602-m02) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:22:51.007962   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.007857   29745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:51.008102   29384 main.go:141] libmachine: (ha-158602-m02) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:22:51.244710   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.244579   29745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa...
	I0827 22:22:51.520653   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.520525   29745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/ha-158602-m02.rawdisk...
	I0827 22:22:51.520682   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Writing magic tar header
	I0827 22:22:51.520692   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Writing SSH key tar header
	I0827 22:22:51.520700   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:51.520661   29745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 ...
	I0827 22:22:51.520778   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02
	I0827 22:22:51.520828   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02 (perms=drwx------)
	I0827 22:22:51.520856   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:22:51.520872   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:22:51.520888   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:22:51.520897   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:22:51.520908   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:22:51.520920   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:22:51.520933   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Checking permissions on dir: /home
	I0827 22:22:51.520970   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:22:51.520986   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:22:51.520999   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:22:51.521015   29384 main.go:141] libmachine: (ha-158602-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:22:51.521030   29384 main.go:141] libmachine: (ha-158602-m02) Creating domain...
	I0827 22:22:51.521041   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Skipping /home - not owner
	I0827 22:22:51.521890   29384 main.go:141] libmachine: (ha-158602-m02) define libvirt domain using xml: 
	I0827 22:22:51.521903   29384 main.go:141] libmachine: (ha-158602-m02) <domain type='kvm'>
	I0827 22:22:51.521910   29384 main.go:141] libmachine: (ha-158602-m02)   <name>ha-158602-m02</name>
	I0827 22:22:51.521915   29384 main.go:141] libmachine: (ha-158602-m02)   <memory unit='MiB'>2200</memory>
	I0827 22:22:51.521923   29384 main.go:141] libmachine: (ha-158602-m02)   <vcpu>2</vcpu>
	I0827 22:22:51.521930   29384 main.go:141] libmachine: (ha-158602-m02)   <features>
	I0827 22:22:51.521942   29384 main.go:141] libmachine: (ha-158602-m02)     <acpi/>
	I0827 22:22:51.521949   29384 main.go:141] libmachine: (ha-158602-m02)     <apic/>
	I0827 22:22:51.521955   29384 main.go:141] libmachine: (ha-158602-m02)     <pae/>
	I0827 22:22:51.521961   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.521969   29384 main.go:141] libmachine: (ha-158602-m02)   </features>
	I0827 22:22:51.521979   29384 main.go:141] libmachine: (ha-158602-m02)   <cpu mode='host-passthrough'>
	I0827 22:22:51.522005   29384 main.go:141] libmachine: (ha-158602-m02)   
	I0827 22:22:51.522024   29384 main.go:141] libmachine: (ha-158602-m02)   </cpu>
	I0827 22:22:51.522037   29384 main.go:141] libmachine: (ha-158602-m02)   <os>
	I0827 22:22:51.522044   29384 main.go:141] libmachine: (ha-158602-m02)     <type>hvm</type>
	I0827 22:22:51.522054   29384 main.go:141] libmachine: (ha-158602-m02)     <boot dev='cdrom'/>
	I0827 22:22:51.522061   29384 main.go:141] libmachine: (ha-158602-m02)     <boot dev='hd'/>
	I0827 22:22:51.522076   29384 main.go:141] libmachine: (ha-158602-m02)     <bootmenu enable='no'/>
	I0827 22:22:51.522086   29384 main.go:141] libmachine: (ha-158602-m02)   </os>
	I0827 22:22:51.522106   29384 main.go:141] libmachine: (ha-158602-m02)   <devices>
	I0827 22:22:51.522132   29384 main.go:141] libmachine: (ha-158602-m02)     <disk type='file' device='cdrom'>
	I0827 22:22:51.522149   29384 main.go:141] libmachine: (ha-158602-m02)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/boot2docker.iso'/>
	I0827 22:22:51.522160   29384 main.go:141] libmachine: (ha-158602-m02)       <target dev='hdc' bus='scsi'/>
	I0827 22:22:51.522172   29384 main.go:141] libmachine: (ha-158602-m02)       <readonly/>
	I0827 22:22:51.522180   29384 main.go:141] libmachine: (ha-158602-m02)     </disk>
	I0827 22:22:51.522193   29384 main.go:141] libmachine: (ha-158602-m02)     <disk type='file' device='disk'>
	I0827 22:22:51.522207   29384 main.go:141] libmachine: (ha-158602-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:22:51.522226   29384 main.go:141] libmachine: (ha-158602-m02)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/ha-158602-m02.rawdisk'/>
	I0827 22:22:51.522238   29384 main.go:141] libmachine: (ha-158602-m02)       <target dev='hda' bus='virtio'/>
	I0827 22:22:51.522250   29384 main.go:141] libmachine: (ha-158602-m02)     </disk>
	I0827 22:22:51.522260   29384 main.go:141] libmachine: (ha-158602-m02)     <interface type='network'>
	I0827 22:22:51.522283   29384 main.go:141] libmachine: (ha-158602-m02)       <source network='mk-ha-158602'/>
	I0827 22:22:51.522300   29384 main.go:141] libmachine: (ha-158602-m02)       <model type='virtio'/>
	I0827 22:22:51.522312   29384 main.go:141] libmachine: (ha-158602-m02)     </interface>
	I0827 22:22:51.522322   29384 main.go:141] libmachine: (ha-158602-m02)     <interface type='network'>
	I0827 22:22:51.522332   29384 main.go:141] libmachine: (ha-158602-m02)       <source network='default'/>
	I0827 22:22:51.522338   29384 main.go:141] libmachine: (ha-158602-m02)       <model type='virtio'/>
	I0827 22:22:51.522345   29384 main.go:141] libmachine: (ha-158602-m02)     </interface>
	I0827 22:22:51.522352   29384 main.go:141] libmachine: (ha-158602-m02)     <serial type='pty'>
	I0827 22:22:51.522381   29384 main.go:141] libmachine: (ha-158602-m02)       <target port='0'/>
	I0827 22:22:51.522396   29384 main.go:141] libmachine: (ha-158602-m02)     </serial>
	I0827 22:22:51.522405   29384 main.go:141] libmachine: (ha-158602-m02)     <console type='pty'>
	I0827 22:22:51.522413   29384 main.go:141] libmachine: (ha-158602-m02)       <target type='serial' port='0'/>
	I0827 22:22:51.522424   29384 main.go:141] libmachine: (ha-158602-m02)     </console>
	I0827 22:22:51.522434   29384 main.go:141] libmachine: (ha-158602-m02)     <rng model='virtio'>
	I0827 22:22:51.522447   29384 main.go:141] libmachine: (ha-158602-m02)       <backend model='random'>/dev/random</backend>
	I0827 22:22:51.522457   29384 main.go:141] libmachine: (ha-158602-m02)     </rng>
	I0827 22:22:51.522465   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.522478   29384 main.go:141] libmachine: (ha-158602-m02)     
	I0827 22:22:51.522506   29384 main.go:141] libmachine: (ha-158602-m02)   </devices>
	I0827 22:22:51.522528   29384 main.go:141] libmachine: (ha-158602-m02) </domain>
	I0827 22:22:51.522542   29384 main.go:141] libmachine: (ha-158602-m02) 
	I0827 22:22:51.529093   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:c8:11:5e in network default
	I0827 22:22:51.529610   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring networks are active...
	I0827 22:22:51.529633   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:51.530655   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring network default is active
	I0827 22:22:51.531006   29384 main.go:141] libmachine: (ha-158602-m02) Ensuring network mk-ha-158602 is active
	I0827 22:22:51.531405   29384 main.go:141] libmachine: (ha-158602-m02) Getting domain xml...
	I0827 22:22:51.532192   29384 main.go:141] libmachine: (ha-158602-m02) Creating domain...
	I0827 22:22:52.755344   29384 main.go:141] libmachine: (ha-158602-m02) Waiting to get IP...
	I0827 22:22:52.756055   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:52.756425   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:52.756455   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:52.756415   29745 retry.go:31] will retry after 194.568413ms: waiting for machine to come up
	I0827 22:22:52.953024   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:52.953407   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:52.953434   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:52.953394   29745 retry.go:31] will retry after 325.007706ms: waiting for machine to come up
	I0827 22:22:53.280017   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:53.280646   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:53.280695   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:53.280532   29745 retry.go:31] will retry after 326.358818ms: waiting for machine to come up
	I0827 22:22:53.608162   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:53.608635   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:53.608661   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:53.608597   29745 retry.go:31] will retry after 573.876873ms: waiting for machine to come up
	I0827 22:22:54.184341   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:54.184903   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:54.184933   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:54.184861   29745 retry.go:31] will retry after 467.432481ms: waiting for machine to come up
	I0827 22:22:54.653558   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:54.653987   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:54.654003   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:54.653939   29745 retry.go:31] will retry after 932.113121ms: waiting for machine to come up
	I0827 22:22:55.588071   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:55.588548   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:55.588570   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:55.588507   29745 retry.go:31] will retry after 1.106053983s: waiting for machine to come up
	I0827 22:22:56.695946   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:56.696501   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:56.696527   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:56.696449   29745 retry.go:31] will retry after 1.180147184s: waiting for machine to come up
	I0827 22:22:57.877879   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:57.878219   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:57.878246   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:57.878186   29745 retry.go:31] will retry after 1.604135095s: waiting for machine to come up
	I0827 22:22:59.483523   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:22:59.484044   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:22:59.484070   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:22:59.483980   29745 retry.go:31] will retry after 2.081579241s: waiting for machine to come up
	I0827 22:23:01.567515   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:01.568007   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:01.568035   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:01.567958   29745 retry.go:31] will retry after 2.372701308s: waiting for machine to come up
	I0827 22:23:03.942705   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:03.943068   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:03.943090   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:03.943047   29745 retry.go:31] will retry after 3.144488032s: waiting for machine to come up
	I0827 22:23:07.088992   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:07.089281   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find current IP address of domain ha-158602-m02 in network mk-ha-158602
	I0827 22:23:07.089305   29384 main.go:141] libmachine: (ha-158602-m02) DBG | I0827 22:23:07.089253   29745 retry.go:31] will retry after 4.261022366s: waiting for machine to come up
	I0827 22:23:11.352145   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.352500   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has current primary IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.352526   29384 main.go:141] libmachine: (ha-158602-m02) Found IP for machine: 192.168.39.142
	I0827 22:23:11.352541   29384 main.go:141] libmachine: (ha-158602-m02) Reserving static IP address...
	I0827 22:23:11.352864   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find host DHCP lease matching {name: "ha-158602-m02", mac: "52:54:00:fa:7e:06", ip: "192.168.39.142"} in network mk-ha-158602
	I0827 22:23:11.426293   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Getting to WaitForSSH function...
	I0827 22:23:11.426351   29384 main.go:141] libmachine: (ha-158602-m02) Reserved static IP address: 192.168.39.142
	I0827 22:23:11.426366   29384 main.go:141] libmachine: (ha-158602-m02) Waiting for SSH to be available...
	I0827 22:23:11.429192   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:11.429602   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602
	I0827 22:23:11.429645   29384 main.go:141] libmachine: (ha-158602-m02) DBG | unable to find defined IP address of network mk-ha-158602 interface with MAC address 52:54:00:fa:7e:06
	I0827 22:23:11.429800   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH client type: external
	I0827 22:23:11.429825   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa (-rw-------)
	I0827 22:23:11.429892   29384 main.go:141] libmachine: (ha-158602-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:23:11.429925   29384 main.go:141] libmachine: (ha-158602-m02) DBG | About to run SSH command:
	I0827 22:23:11.429971   29384 main.go:141] libmachine: (ha-158602-m02) DBG | exit 0
	I0827 22:23:11.433467   29384 main.go:141] libmachine: (ha-158602-m02) DBG | SSH cmd err, output: exit status 255: 
	I0827 22:23:11.433491   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0827 22:23:11.433501   29384 main.go:141] libmachine: (ha-158602-m02) DBG | command : exit 0
	I0827 22:23:11.433509   29384 main.go:141] libmachine: (ha-158602-m02) DBG | err     : exit status 255
	I0827 22:23:11.433525   29384 main.go:141] libmachine: (ha-158602-m02) DBG | output  : 
	I0827 22:23:14.435633   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Getting to WaitForSSH function...
	I0827 22:23:14.438942   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.439399   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.439427   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.439591   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH client type: external
	I0827 22:23:14.439616   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa (-rw-------)
	I0827 22:23:14.439649   29384 main.go:141] libmachine: (ha-158602-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:23:14.439666   29384 main.go:141] libmachine: (ha-158602-m02) DBG | About to run SSH command:
	I0827 22:23:14.439683   29384 main.go:141] libmachine: (ha-158602-m02) DBG | exit 0
	I0827 22:23:14.560627   29384 main.go:141] libmachine: (ha-158602-m02) DBG | SSH cmd err, output: <nil>: 
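The exchange above is the driver's WaitForSSH loop: it shells out to the external ssh client with the options shown in the log and runs `exit 0` until the guest answers (the first attempt fails with status 255 because the VM had not yet received an IP). A minimal Go sketch of such a probe loop follows; the host, key path, and timeout are placeholders, not minikube's actual values.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs `ssh ... "exit 0"` against the guest until the
// command succeeds or the deadline passes, mirroring the probe visible in
// the log. host, keyPath and timeout are placeholders, not minikube's values.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // the guest answered, SSH is available
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh to %s not available after %s", host, timeout)
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
	}
}

func main() {
	if err := waitForSSH("192.168.39.142", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```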
	I0827 22:23:14.560871   29384 main.go:141] libmachine: (ha-158602-m02) KVM machine creation complete!
	I0827 22:23:14.561354   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:23:14.561929   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:14.562155   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:14.562361   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:23:14.562389   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:23:14.563859   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:23:14.563876   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:23:14.563886   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:23:14.563895   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.566614   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.566954   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.566976   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.567129   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.567287   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.567453   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.567603   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.567797   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.568056   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.568072   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:23:14.663565   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:23:14.663591   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:23:14.663599   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.666428   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.666794   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.666822   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.667033   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.667228   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.667397   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.667529   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.667677   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.667908   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.667920   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:23:14.764898   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:23:14.764966   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:23:14.764973   29384 main.go:141] libmachine: Provisioning with buildroot...
	I0827 22:23:14.764994   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.765210   29384 buildroot.go:166] provisioning hostname "ha-158602-m02"
	I0827 22:23:14.765234   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.765378   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.767952   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.768354   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.768380   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.768574   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.768775   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.768928   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.769043   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.769178   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.769380   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.769400   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602-m02 && echo "ha-158602-m02" | sudo tee /etc/hostname
	I0827 22:23:14.876662   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602-m02
	
	I0827 22:23:14.876693   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.879304   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.879683   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.879717   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.879856   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:14.880131   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.880325   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:14.880475   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:14.880658   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:14.880814   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:14.880829   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:23:14.985181   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
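The two commands above set the node's hostname and patch the 127.0.1.1 entry in /etc/hosts so it resolves to the new name. The small sketch below assembles the same two shell snippets in Go; it is an illustrative helper, not minikube's own code.

```go
package main

import "fmt"

// hostnameCommands builds the two shell snippets seen in the log: one that
// sets the hostname and /etc/hostname, and one that keeps the 127.0.1.1
// entry in /etc/hosts pointing at the new name. Illustrative helper only.
func hostnameCommands(hostname string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
	fixHosts = fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("ha-158602-m02")
	fmt.Println(set)
	fmt.Println(fix)
}
```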
	I0827 22:23:14.985208   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:23:14.985226   29384 buildroot.go:174] setting up certificates
	I0827 22:23:14.985238   29384 provision.go:84] configureAuth start
	I0827 22:23:14.985249   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetMachineName
	I0827 22:23:14.985577   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:14.988233   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.988621   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.988654   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.988772   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:14.990837   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.991103   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:14.991133   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:14.991273   29384 provision.go:143] copyHostCerts
	I0827 22:23:14.991305   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:23:14.991344   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:23:14.991356   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:23:14.991437   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:23:14.991508   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:23:14.991525   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:23:14.991531   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:23:14.991555   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:23:14.991600   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:23:14.991617   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:23:14.991623   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:23:14.991645   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:23:14.991703   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602-m02 san=[127.0.0.1 192.168.39.142 ha-158602-m02 localhost minikube]
	I0827 22:23:15.100282   29384 provision.go:177] copyRemoteCerts
	I0827 22:23:15.100347   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:23:15.100370   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.102865   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.103160   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.103183   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.103346   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.103548   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.103673   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.103780   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.182993   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:23:15.183062   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 22:23:15.205343   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:23:15.205413   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:23:15.228193   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:23:15.228275   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:23:15.250829   29384 provision.go:87] duration metric: took 265.58072ms to configureAuth
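configureAuth generates a server certificate for the node and copies server.pem, server-key.pem, and ca.pem into /etc/docker on the guest. One way to reproduce that push without minikube's in-process SSH runner is to stream each file through ssh into `sudo tee`; the sketch below does exactly that, with the local filenames and key path as placeholders.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path"
)

// pushFile streams a local file into a root-owned path on the guest by
// piping it through ssh to `sudo tee`. Host, user and remote paths match the
// log; the local filenames and key path are placeholders.
func pushFile(host, keyPath, local, remote string) error {
	f, err := os.Open(local)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		fmt.Sprintf("sudo mkdir -p %s && sudo tee %s > /dev/null", path.Dir(remote), remote))
	cmd.Stdin = f
	return cmd.Run()
}

func main() {
	for _, c := range []struct{ local, remote string }{
		{"server.pem", "/etc/docker/server.pem"},
		{"server-key.pem", "/etc/docker/server-key.pem"},
		{"ca.pem", "/etc/docker/ca.pem"},
	} {
		if err := pushFile("192.168.39.142", "/path/to/id_rsa", c.local, c.remote); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}
```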
	I0827 22:23:15.250855   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:23:15.251072   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:15.251145   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.253917   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.254355   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.254376   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.254553   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.254724   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.254873   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.255009   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.255202   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:15.255362   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:15.255375   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:23:15.465560   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:23:15.465592   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:23:15.465603   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetURL
	I0827 22:23:15.466932   29384 main.go:141] libmachine: (ha-158602-m02) DBG | Using libvirt version 6000000
	I0827 22:23:15.469084   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.469410   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.469442   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.469554   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:23:15.469568   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:23:15.469595   29384 client.go:171] duration metric: took 24.464104776s to LocalClient.Create
	I0827 22:23:15.469625   29384 start.go:167] duration metric: took 24.464170956s to libmachine.API.Create "ha-158602"
	I0827 22:23:15.469636   29384 start.go:293] postStartSetup for "ha-158602-m02" (driver="kvm2")
	I0827 22:23:15.469650   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:23:15.469672   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.469959   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:23:15.469982   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.472126   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.472495   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.472524   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.472652   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.472852   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.473029   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.473181   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.550537   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:23:15.554365   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:23:15.554393   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:23:15.554452   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:23:15.554542   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:23:15.554556   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:23:15.554658   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:23:15.563879   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:23:15.585230   29384 start.go:296] duration metric: took 115.581036ms for postStartSetup
	I0827 22:23:15.585280   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetConfigRaw
	I0827 22:23:15.585854   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:15.588435   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.588827   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.588847   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.589102   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:23:15.589314   29384 start.go:128] duration metric: took 24.602284134s to createHost
	I0827 22:23:15.589340   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.591310   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.591632   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.591660   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.591800   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.591938   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.592085   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.592174   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.592317   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:23:15.592544   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0827 22:23:15.592559   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:23:15.688858   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797395.666815643
	
	I0827 22:23:15.688881   29384 fix.go:216] guest clock: 1724797395.666815643
	I0827 22:23:15.688891   29384 fix.go:229] Guest: 2024-08-27 22:23:15.666815643 +0000 UTC Remote: 2024-08-27 22:23:15.589326478 +0000 UTC m=+69.897500846 (delta=77.489165ms)
	I0827 22:23:15.688909   29384 fix.go:200] guest clock delta is within tolerance: 77.489165ms
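The clock check above runs `date +%s.%N` on the guest and compares the result with the host's time, accepting the small delta reported. Below is a local sketch of the same comparison, running `date` on the local machine instead of over SSH.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// clockDelta runs `date +%s.%N` (locally here; minikube runs it on the guest
// over SSH) and returns how far that clock is from the local wall clock,
// the same comparison behind the "guest clock delta is within tolerance" line.
func clockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("clock delta: %v\n", d)
}
```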
	I0827 22:23:15.688917   29384 start.go:83] releasing machines lock for "ha-158602-m02", held for 24.702011455s
	I0827 22:23:15.688941   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.689186   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:15.691448   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.691761   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.691786   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.694101   29384 out.go:177] * Found network options:
	I0827 22:23:15.695206   29384 out.go:177]   - NO_PROXY=192.168.39.77
	W0827 22:23:15.696336   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:23:15.696377   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.696887   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.697052   29384 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:23:15.697128   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:23:15.697169   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	W0827 22:23:15.697224   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:23:15.697276   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:23:15.697292   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:23:15.699753   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700017   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700121   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.700147   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700313   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.700413   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:15.700436   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:15.700508   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.700672   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.700694   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:23:15.700849   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.700864   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:23:15.701000   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:23:15.701144   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:23:15.932499   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:23:15.938103   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:23:15.938181   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:23:15.959322   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:23:15.959350   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:23:15.959407   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:23:15.978390   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:23:15.993171   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:23:15.993225   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:23:16.006779   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:23:16.020812   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:23:16.147380   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:23:16.319051   29384 docker.go:233] disabling docker service ...
	I0827 22:23:16.319135   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:23:16.332705   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:23:16.344782   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:23:16.462518   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:23:16.575127   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:23:16.589418   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:23:16.606616   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:23:16.606677   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.616833   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:23:16.616896   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.627069   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.636890   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.646636   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:23:16.656720   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.666297   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.682159   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:23:16.692011   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:23:16.700996   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:23:16.701067   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:23:16.714552   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:23:16.724642   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:16.830976   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
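The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O; the earlier systemctl lines disable containerd, cri-docker, and docker in the same run-over-SSH style. The sketch below replays the key commands, here locally via `sh -c` rather than over SSH.

```go
package main

import (
	"fmt"
	"os/exec"
)

// crioSetupCmds replays the key host-side edits from the log: point CRI-O at
// the pause image, switch it to the cgroupfs driver, set the conmon cgroup,
// load br_netfilter, enable IPv4 forwarding, then restart the service.
// Running them locally via `sh -c` is a sketch; minikube runs them over SSH.
var crioSetupCmds = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo modprobe br_netfilter`,
	`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, c := range crioSetupCmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%q failed: %v\n%s", c, err, out)
		}
	}
}
```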
	I0827 22:23:16.915581   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:23:16.915651   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:23:16.919989   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:23:16.920047   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:23:16.923656   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:23:16.960529   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:23:16.960621   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:23:16.986797   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:23:17.015475   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:23:17.016779   29384 out.go:177]   - env NO_PROXY=192.168.39.77
	I0827 22:23:17.018063   29384 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:23:17.020773   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:17.021153   29384 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:23:05 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:23:17.021190   29384 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:23:17.021416   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:23:17.025661   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:23:17.038035   29384 mustload.go:65] Loading cluster: ha-158602
	I0827 22:23:17.038200   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:17.038554   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:17.038580   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:17.053097   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0827 22:23:17.053473   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:17.053904   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:17.053924   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:17.054181   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:17.054376   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:23:17.056042   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:23:17.056327   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:17.056368   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:17.070703   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0827 22:23:17.071108   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:17.071593   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:17.071613   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:17.071879   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:17.072061   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:23:17.072269   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.142
	I0827 22:23:17.072285   29384 certs.go:194] generating shared ca certs ...
	I0827 22:23:17.072303   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.072432   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:23:17.072504   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:23:17.072519   29384 certs.go:256] generating profile certs ...
	I0827 22:23:17.072604   29384 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:23:17.072627   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267
	I0827 22:23:17.072639   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.254]
	I0827 22:23:17.116741   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 ...
	I0827 22:23:17.116768   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267: {Name:mk70b4f114965c8b6603d6433cb7a61c1c7912e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.116927   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267 ...
	I0827 22:23:17.116940   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267: {Name:mk8147ed32f4bc89d4feb83d8cd3d9f45e7b461e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:23:17.117024   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.4465f267 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:23:17.117148   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.4465f267 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:23:17.117272   29384 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:23:17.117285   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:23:17.117298   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:23:17.117318   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:23:17.117331   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:23:17.117343   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:23:17.117354   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:23:17.117364   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:23:17.117375   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:23:17.117421   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:23:17.117447   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:23:17.117456   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:23:17.117475   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:23:17.117496   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:23:17.117519   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:23:17.117555   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:23:17.117589   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.117603   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.117615   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.117642   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:23:17.120527   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:17.120915   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:23:17.120943   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:17.121066   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:23:17.121238   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:23:17.121367   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:23:17.121488   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:23:17.196819   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0827 22:23:17.201071   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0827 22:23:17.211087   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0827 22:23:17.215455   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0827 22:23:17.225740   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0827 22:23:17.229475   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0827 22:23:17.239004   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0827 22:23:17.242794   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0827 22:23:17.252194   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0827 22:23:17.255806   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0827 22:23:17.264992   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0827 22:23:17.268820   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0827 22:23:17.278569   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:23:17.301784   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:23:17.324240   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:23:17.346025   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:23:17.367550   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0827 22:23:17.389149   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:23:17.411062   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:23:17.432734   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:23:17.455367   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:23:17.477466   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:23:17.499572   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:23:17.521706   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0827 22:23:17.536474   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0827 22:23:17.551438   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0827 22:23:17.566840   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0827 22:23:17.582029   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0827 22:23:17.597562   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0827 22:23:17.612284   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0827 22:23:17.627253   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:23:17.632437   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:23:17.642395   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.646396   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.646433   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:23:17.651638   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:23:17.661370   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:23:17.671124   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.675273   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.675318   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:23:17.680489   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 22:23:17.690088   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:23:17.699733   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.703738   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.703778   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:23:17.708689   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
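Each CA is made trusted the same way: the PEM already copied into /usr/share/ca-certificates is linked into /etc/ssl/certs, `openssl x509 -hash -noout` computes its subject hash, and a `<hash>.0` symlink is created so OpenSSL's hashed lookup finds it (b5213941 is the hash the log shows for minikubeCA.pem). A sketch of that sequence for a single certificate:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA mirrors the sequence in the log for one certificate: link the
// PEM from /usr/share/ca-certificates into /etc/ssl/certs, compute its
// OpenSSL subject hash, and create the <hash>.0 symlink used for lookups.
// Needs root; shown here as a sketch run on the target host.
func installCA(name string) error {
	src := "/usr/share/ca-certificates/" + name
	if err := exec.Command("sudo", "ln", "-fs", src, "/etc/ssl/certs/"+name).Run(); err != nil {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	return exec.Command("sudo", "ln", "-fs", "/etc/ssl/certs/"+name, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := installCA("minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```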
	I0827 22:23:17.718392   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:23:17.721896   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:23:17.721954   29384 kubeadm.go:934] updating node {m02 192.168.39.142 8443 v1.31.0 crio true true} ...
	I0827 22:23:17.722032   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 22:23:17.722057   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:23:17.722083   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:23:17.737084   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:23:17.737154   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
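The manifest above is the kube-vip static pod, with control-plane load-balancing enabled (lb_enable/lb_port) as noted a few lines earlier. The log does not show where this manifest is written at this point; assuming the conventional static-pod directory, installing it would look roughly like the sketch below.

```go
package main

import (
	"fmt"
	"os"
)

// installStaticPod writes a generated manifest (such as the kube-vip config
// above) into the kubelet's static-pod directory. The log does not show this
// step here; /etc/kubernetes/manifests is the conventional location and is
// assumed purely for illustration.
func installStaticPod(name string, manifest []byte) error {
	dir := "/etc/kubernetes/manifests" // watched by the kubelet
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(dir+"/"+name, manifest, 0o644)
}

func main() {
	if err := installStaticPod("kube-vip.yaml", []byte("# manifest body elided\n")); err != nil {
		fmt.Println(err)
	}
}
```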
	I0827 22:23:17.737208   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:23:17.746337   29384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0827 22:23:17.746386   29384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0827 22:23:17.754816   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0827 22:23:17.754838   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:23:17.754847   29384 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0827 22:23:17.754889   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:23:17.754815   29384 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0827 22:23:17.759972   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0827 22:23:17.760005   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0827 22:23:18.689698   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:23:18.689792   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:23:18.695364   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0827 22:23:18.695401   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0827 22:23:18.858280   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:23:18.889059   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:23:18.889171   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:23:18.901142   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0827 22:23:18.901176   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0827 22:23:19.228635   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0827 22:23:19.238415   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0827 22:23:19.254976   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:23:19.270796   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
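The kubectl, kubeadm and kubelet binaries above are fetched from dl.k8s.io with a checksum=file: query that points at the published .sha256 file next to each binary. The sketch below reproduces that verification with the standard library only; it is not minikube's downloader, and the output path is an assumption:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the body.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	got, err := fetch(url, "kubelet")
	if err != nil {
		log.Fatal(err)
	}
	// The published checksum file contains only the hex digest.
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubelet checksum OK:", got)
}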
	I0827 22:23:19.286360   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:23:19.290233   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:23:19.302822   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:19.418817   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:23:19.436857   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:23:19.437265   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:23:19.437314   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:23:19.452544   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0827 22:23:19.453031   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:23:19.453525   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:23:19.453544   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:23:19.453889   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:23:19.454107   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:23:19.454258   29384 start.go:317] joinCluster: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:23:19.454350   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0827 22:23:19.454370   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:23:19.457214   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:19.457649   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:23:19.457674   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:23:19.457830   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:23:19.457988   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:23:19.458132   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:23:19.458273   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:23:19.597839   29384 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:23:19.597880   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ucr0iw.a616mktqyqppgnwr --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m02 --control-plane --apiserver-advertise-address=192.168.39.142 --apiserver-bind-port=8443"
	I0827 22:23:41.399875   29384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ucr0iw.a616mktqyqppgnwr --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m02 --control-plane --apiserver-advertise-address=192.168.39.142 --apiserver-bind-port=8443": (21.801972228s)
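The join command pins the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's Subject Public Key Info. A standalone sketch for recomputing that pin from a CA certificate; the input path here is an assumption (on the node the CA sits under /var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical local copy of the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The pin is the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}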
	I0827 22:23:41.399915   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0827 22:23:41.847756   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602-m02 minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=false
	I0827 22:23:41.970431   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-158602-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0827 22:23:42.092283   29384 start.go:319] duration metric: took 22.63801931s to joinCluster
	I0827 22:23:42.092371   29384 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:23:42.092716   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:23:42.093923   29384 out.go:177] * Verifying Kubernetes components...
	I0827 22:23:42.095489   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:23:42.337315   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:23:42.360051   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:23:42.360395   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0827 22:23:42.360509   29384 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.77:8443
	I0827 22:23:42.360816   29384 node_ready.go:35] waiting up to 6m0s for node "ha-158602-m02" to be "Ready" ...
	I0827 22:23:42.360931   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:42.360943   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:42.360954   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:42.360965   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:42.371816   29384 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0827 22:23:42.861719   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:42.861739   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:42.861751   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:42.861756   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:42.867443   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:23:43.361465   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:43.361489   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:43.361500   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:43.361506   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:43.368142   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:23:43.861711   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:43.861737   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:43.861748   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:43.861755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:43.864816   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:44.361761   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:44.361782   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:44.361788   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:44.361793   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:44.365264   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:44.365782   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:44.861642   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:44.861669   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:44.861681   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:44.861687   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:44.864853   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:45.361722   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:45.361743   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:45.361751   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:45.361755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:45.365102   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:45.861804   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:45.861832   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:45.861843   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:45.861849   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:45.865089   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.361335   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:46.361361   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:46.361371   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:46.361377   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:46.364754   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.861229   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:46.861250   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:46.861258   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:46.861263   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:46.864782   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:46.865391   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:47.361745   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:47.361770   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:47.361782   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:47.361790   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:47.364768   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:47.861755   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:47.861781   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:47.861788   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:47.861793   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:47.864844   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.361704   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:48.361724   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:48.361732   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:48.361735   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:48.364864   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.861716   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:48.861753   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:48.861765   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:48.861772   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:48.864993   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:48.865688   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:49.361696   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:49.361714   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:49.361722   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:49.361727   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:49.364009   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:49.861323   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:49.861371   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:49.861383   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:49.861390   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:49.864399   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:50.361738   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:50.361763   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:50.361780   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:50.361785   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:50.372425   29384 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0827 22:23:50.861692   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:50.861712   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:50.861719   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:50.861724   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:50.864315   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:51.361563   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:51.361588   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:51.361601   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:51.361606   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:51.364710   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:51.365212   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:51.861601   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:51.861628   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:51.861639   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:51.861644   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:51.864745   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:52.361691   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:52.361716   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:52.361727   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:52.361733   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:52.364864   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:52.861694   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:52.861716   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:52.861727   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:52.861732   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:52.865072   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:53.361245   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:53.361268   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:53.361279   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:53.361284   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:53.364123   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:53.862011   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:53.862037   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:53.862048   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:53.862054   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:53.866913   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:23:53.867501   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:54.361676   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:54.361701   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:54.361709   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:54.361713   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:54.364743   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:54.861208   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:54.861230   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:54.861239   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:54.861243   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:54.863841   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:55.361750   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:55.361781   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:55.361793   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:55.361798   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:55.368246   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:23:55.861192   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:55.861219   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:55.861235   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:55.861240   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:55.864110   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:56.361561   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:56.361580   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:56.361600   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:56.361606   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:56.364724   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:56.365228   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:56.861717   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:56.861741   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:56.861749   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:56.861753   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:56.865067   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:57.361760   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:57.361786   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:57.361798   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:57.361804   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:57.364673   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:57.861733   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:57.861756   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:57.861767   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:57.861777   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:57.864796   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:58.361725   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:58.361746   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:58.361754   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:58.361758   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:58.365625   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:23:58.366190   29384 node_ready.go:53] node "ha-158602-m02" has status "Ready":"False"
	I0827 22:23:58.861364   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:58.861386   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:58.861394   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:58.861398   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:58.864292   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:59.362002   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:59.362027   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:59.362036   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:59.362041   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:59.365024   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:23:59.861335   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:23:59.861369   29384 round_trippers.go:469] Request Headers:
	I0827 22:23:59.861378   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:23:59.861382   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:23:59.864212   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.361420   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.361446   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.361455   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.361459   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.364515   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.860974   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.861002   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.861013   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.861019   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.864222   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.864966   29384 node_ready.go:49] node "ha-158602-m02" has status "Ready":"True"
	I0827 22:24:00.864982   29384 node_ready.go:38] duration metric: took 18.504142957s for node "ha-158602-m02" to be "Ready" ...
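The loop above simply re-GETs /api/v1/nodes/ha-158602-m02 about twice a second until the Ready condition flips to True, which took roughly 18.5s here. The same check written against client-go, as a self-contained sketch (kubeconfig location and poll interval are assumptions, not the test's code):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-158602-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}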
	I0827 22:24:00.864991   29384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:24:00.865070   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:00.865081   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.865088   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.865094   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.869052   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:00.874795   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.874865   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxzgs
	I0827 22:24:00.874871   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.874878   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.874882   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.877799   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.878375   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.878389   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.878397   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.878401   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.880710   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.881163   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.881179   29384 pod_ready.go:82] duration metric: took 6.360916ms for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.881188   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.881233   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x6dcd
	I0827 22:24:00.881240   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.881247   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.881252   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.883599   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.884224   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.884237   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.884244   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.884248   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.886706   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.887223   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.887239   29384 pod_ready.go:82] duration metric: took 6.045435ms for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.887247   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.887325   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602
	I0827 22:24:00.887335   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.887342   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.887359   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.889398   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.890037   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:00.890052   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.890060   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.890063   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.892148   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.892760   29384 pod_ready.go:93] pod "etcd-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.892784   29384 pod_ready.go:82] duration metric: took 5.530261ms for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.892796   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.892842   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m02
	I0827 22:24:00.892850   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.892857   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.892860   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.895124   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.895601   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:00.895621   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:00.895629   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:00.895635   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:00.897675   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:00.898231   29384 pod_ready.go:93] pod "etcd-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:00.898248   29384 pod_ready.go:82] duration metric: took 5.445558ms for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:00.898261   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.061746   29384 request.go:632] Waited for 163.434873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:24:01.061822   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:24:01.061831   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.061846   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.061852   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.065188   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
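The "Waited ... due to client-side throttling" messages come from client-go's default client-side rate limiter (QPS 5, burst 10 when rest.Config leaves them unset), not from server-side API Priority and Fairness. A minimal sketch of relaxing those limits before building a clientset, should a burst of readiness GETs like the one above ever need to go faster:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// Raise the client-side rate limits (defaults: QPS=5, Burst=10), which
	// are what trigger the throttling waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
	log.Println("clientset built with relaxed client-side rate limits")
}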
	I0827 22:24:01.261601   29384 request.go:632] Waited for 195.377899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:01.261653   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:01.261658   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.261666   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.261671   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.264407   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:01.265048   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:01.265068   29384 pod_ready.go:82] duration metric: took 366.801663ms for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.265078   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.460991   29384 request.go:632] Waited for 195.852895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:24:01.461056   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:24:01.461061   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.461068   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.461072   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.464405   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:01.661661   29384 request.go:632] Waited for 196.322387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:01.661722   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:01.661735   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.661755   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.661778   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.665536   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:01.666159   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:01.666177   29384 pod_ready.go:82] duration metric: took 401.092427ms for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.666189   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:01.861306   29384 request.go:632] Waited for 195.042639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:24:01.861414   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:24:01.861427   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:01.861437   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:01.861445   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:01.864421   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.061461   29384 request.go:632] Waited for 196.404456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.061514   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.061520   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.061530   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.061545   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.064495   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.064970   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.064988   29384 pod_ready.go:82] duration metric: took 398.791787ms for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.064997   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.261529   29384 request.go:632] Waited for 196.463267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:24:02.261583   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:24:02.261590   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.261600   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.261605   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.264684   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:02.461834   29384 request.go:632] Waited for 196.352983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:02.461899   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:02.461904   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.461912   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.461915   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.465015   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:02.465502   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.465520   29384 pod_ready.go:82] duration metric: took 400.516744ms for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.465532   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.661656   29384 request.go:632] Waited for 196.035045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:24:02.661715   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:24:02.661720   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.661728   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.661733   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.666595   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:02.861627   29384 request.go:632] Waited for 194.390829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.861684   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:02.861689   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:02.861698   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:02.861703   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:02.864690   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:02.865351   29384 pod_ready.go:93] pod "kube-proxy-5pmrv" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:02.865376   29384 pod_ready.go:82] duration metric: took 399.833719ms for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:02.865385   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.061431   29384 request.go:632] Waited for 195.967993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:24:03.061492   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:24:03.061499   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.061510   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.061520   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.064456   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:03.261507   29384 request.go:632] Waited for 196.385048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:03.261571   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:03.261578   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.261589   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.261595   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.264613   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:03.265201   29384 pod_ready.go:93] pod "kube-proxy-slgmm" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:03.265220   29384 pod_ready.go:82] duration metric: took 399.828388ms for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.265232   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.461417   29384 request.go:632] Waited for 196.094406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:24:03.461481   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:24:03.461489   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.461499   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.461506   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.466110   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:03.661058   29384 request.go:632] Waited for 194.303204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:03.661142   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:24:03.661152   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.661159   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.661164   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.664494   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:03.665204   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:03.665222   29384 pod_ready.go:82] duration metric: took 399.982907ms for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.665231   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:03.861342   29384 request.go:632] Waited for 196.034031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:24:03.861402   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:24:03.861407   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:03.861416   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:03.861420   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:03.864317   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:24:04.061128   29384 request.go:632] Waited for 196.306564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:04.061209   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:24:04.061215   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.061223   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.061227   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.064333   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.064792   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:24:04.064811   29384 pod_ready.go:82] duration metric: took 399.574125ms for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:24:04.064821   29384 pod_ready.go:39] duration metric: took 3.199819334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
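Each pod_ready wait above is a pair of GETs: the pod in kube-system, then the node it runs on. A self-contained sketch of the pod-side half of that check for the same label set, using client-go (the helper and kubeconfig path are illustrative, not the test's code):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same label set the test waits on for system-critical pods.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}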
	I0827 22:24:04.064837   29384 api_server.go:52] waiting for apiserver process to appear ...
	I0827 22:24:04.064892   29384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:24:04.081133   29384 api_server.go:72] duration metric: took 21.988731021s to wait for apiserver process to appear ...
	I0827 22:24:04.081153   29384 api_server.go:88] waiting for apiserver healthz status ...
	I0827 22:24:04.081181   29384 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0827 22:24:04.085562   29384 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0827 22:24:04.085666   29384 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I0827 22:24:04.085676   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.085683   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.085688   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.086542   29384 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0827 22:24:04.086702   29384 api_server.go:141] control plane version: v1.31.0
	I0827 22:24:04.086720   29384 api_server.go:131] duration metric: took 5.560987ms to wait for apiserver health ...
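[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 and the body "ok". A standard-library sketch of that probe follows; it skips TLS verification for brevity instead of loading the cluster certificates the real client uses.

	// healthz_sketch.go: probe an apiserver /healthz endpoint.
	// Sketch only: InsecureSkipVerify stands in for proper cluster certs.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.77:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
	}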
	I0827 22:24:04.086730   29384 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 22:24:04.261058   29384 request.go:632] Waited for 174.261561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.261147   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.261156   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.261168   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.261179   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.265764   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:24:04.271274   29384 system_pods.go:59] 17 kube-system pods found
	I0827 22:24:04.271301   29384 system_pods.go:61] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:24:04.271306   29384 system_pods.go:61] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:24:04.271310   29384 system_pods.go:61] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:24:04.271313   29384 system_pods.go:61] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:24:04.271319   29384 system_pods.go:61] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:24:04.271323   29384 system_pods.go:61] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:24:04.271329   29384 system_pods.go:61] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:24:04.271334   29384 system_pods.go:61] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:24:04.271339   29384 system_pods.go:61] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:24:04.271344   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:24:04.271351   29384 system_pods.go:61] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:24:04.271356   29384 system_pods.go:61] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:24:04.271363   29384 system_pods.go:61] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:24:04.271366   29384 system_pods.go:61] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:24:04.271369   29384 system_pods.go:61] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:24:04.271372   29384 system_pods.go:61] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:24:04.271375   29384 system_pods.go:61] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:24:04.271383   29384 system_pods.go:74] duration metric: took 184.647807ms to wait for pod list to return data ...
	I0827 22:24:04.271393   29384 default_sa.go:34] waiting for default service account to be created ...
	I0827 22:24:04.461890   29384 request.go:632] Waited for 190.422827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:24:04.461984   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:24:04.461999   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.462010   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.462016   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.465756   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.465952   29384 default_sa.go:45] found service account: "default"
	I0827 22:24:04.465967   29384 default_sa.go:55] duration metric: took 194.566523ms for default service account to be created ...
	I0827 22:24:04.465974   29384 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 22:24:04.661389   29384 request.go:632] Waited for 195.3503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.661453   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:24:04.661458   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.661466   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.661472   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.666509   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:24:04.672801   29384 system_pods.go:86] 17 kube-system pods found
	I0827 22:24:04.672827   29384 system_pods.go:89] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:24:04.672832   29384 system_pods.go:89] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:24:04.672836   29384 system_pods.go:89] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:24:04.672840   29384 system_pods.go:89] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:24:04.672844   29384 system_pods.go:89] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:24:04.672847   29384 system_pods.go:89] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:24:04.672850   29384 system_pods.go:89] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:24:04.672855   29384 system_pods.go:89] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:24:04.672858   29384 system_pods.go:89] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:24:04.672862   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:24:04.672865   29384 system_pods.go:89] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:24:04.672869   29384 system_pods.go:89] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:24:04.672875   29384 system_pods.go:89] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:24:04.672878   29384 system_pods.go:89] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:24:04.672884   29384 system_pods.go:89] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:24:04.672888   29384 system_pods.go:89] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:24:04.672892   29384 system_pods.go:89] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:24:04.672898   29384 system_pods.go:126] duration metric: took 206.919567ms to wait for k8s-apps to be running ...
	I0827 22:24:04.672907   29384 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 22:24:04.672949   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:24:04.688955   29384 system_svc.go:56] duration metric: took 16.039406ms WaitForService to wait for kubelet
	I0827 22:24:04.688987   29384 kubeadm.go:582] duration metric: took 22.596587501s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:24:04.689004   29384 node_conditions.go:102] verifying NodePressure condition ...
	I0827 22:24:04.861434   29384 request.go:632] Waited for 172.327417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I0827 22:24:04.861483   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I0827 22:24:04.861488   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:04.861496   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:04.861500   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:04.864922   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:04.865734   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:24:04.865757   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:24:04.865769   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:24:04.865772   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:24:04.865776   29384 node_conditions.go:105] duration metric: took 176.767658ms to run NodePressure ...
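[editor's note] The NodePressure step above lists the cluster nodes and reports each node's cpu and ephemeral-storage capacity. A client-go sketch of reading those capacities follows; the kubeconfig path is again a placeholder.

	// nodecapacity_sketch.go: list nodes and print cpu / ephemeral-storage capacity.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}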
	I0827 22:24:04.865787   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:24:04.865809   29384 start.go:255] writing updated cluster config ...
	I0827 22:24:04.867803   29384 out.go:201] 
	I0827 22:24:04.869186   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:04.869273   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:04.870872   29384 out.go:177] * Starting "ha-158602-m03" control-plane node in "ha-158602" cluster
	I0827 22:24:04.872079   29384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:24:04.872097   29384 cache.go:56] Caching tarball of preloaded images
	I0827 22:24:04.872187   29384 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:24:04.872199   29384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:24:04.872282   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:04.872436   29384 start.go:360] acquireMachinesLock for ha-158602-m03: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:24:04.872503   29384 start.go:364] duration metric: took 46.449µs to acquireMachinesLock for "ha-158602-m03"
	I0827 22:24:04.872524   29384 start.go:93] Provisioning new machine with config: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:04.872619   29384 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0827 22:24:04.873955   29384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 22:24:04.874037   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:04.874072   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:04.889205   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45711
	I0827 22:24:04.889637   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:04.890081   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:04.890104   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:04.890428   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:04.890668   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:04.890812   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:04.890978   29384 start.go:159] libmachine.API.Create for "ha-158602" (driver="kvm2")
	I0827 22:24:04.891007   29384 client.go:168] LocalClient.Create starting
	I0827 22:24:04.891037   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 22:24:04.891069   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:24:04.891083   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:24:04.891130   29384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 22:24:04.891148   29384 main.go:141] libmachine: Decoding PEM data...
	I0827 22:24:04.891161   29384 main.go:141] libmachine: Parsing certificate...
	I0827 22:24:04.891182   29384 main.go:141] libmachine: Running pre-create checks...
	I0827 22:24:04.891190   29384 main.go:141] libmachine: (ha-158602-m03) Calling .PreCreateCheck
	I0827 22:24:04.891345   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:04.891744   29384 main.go:141] libmachine: Creating machine...
	I0827 22:24:04.891758   29384 main.go:141] libmachine: (ha-158602-m03) Calling .Create
	I0827 22:24:04.891912   29384 main.go:141] libmachine: (ha-158602-m03) Creating KVM machine...
	I0827 22:24:04.893080   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found existing default KVM network
	I0827 22:24:04.893222   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found existing private KVM network mk-ha-158602
	I0827 22:24:04.893349   29384 main.go:141] libmachine: (ha-158602-m03) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 ...
	I0827 22:24:04.893377   29384 main.go:141] libmachine: (ha-158602-m03) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 22:24:04.893425   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:04.893338   30149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:24:04.893519   29384 main.go:141] libmachine: (ha-158602-m03) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 22:24:05.125864   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.125741   30149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa...
	I0827 22:24:05.363185   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.363057   30149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/ha-158602-m03.rawdisk...
	I0827 22:24:05.363221   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Writing magic tar header
	I0827 22:24:05.363238   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Writing SSH key tar header
	I0827 22:24:05.363252   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:05.363166   30149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 ...
	I0827 22:24:05.363334   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03
	I0827 22:24:05.363373   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 22:24:05.363386   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03 (perms=drwx------)
	I0827 22:24:05.363405   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 22:24:05.363418   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 22:24:05.363438   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 22:24:05.363459   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 22:24:05.363474   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:24:05.363500   29384 main.go:141] libmachine: (ha-158602-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 22:24:05.363521   29384 main.go:141] libmachine: (ha-158602-m03) Creating domain...
	I0827 22:24:05.363537   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 22:24:05.363557   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 22:24:05.363571   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home/jenkins
	I0827 22:24:05.363583   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Checking permissions on dir: /home
	I0827 22:24:05.363593   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Skipping /home - not owner
	I0827 22:24:05.364565   29384 main.go:141] libmachine: (ha-158602-m03) define libvirt domain using xml: 
	I0827 22:24:05.364587   29384 main.go:141] libmachine: (ha-158602-m03) <domain type='kvm'>
	I0827 22:24:05.364598   29384 main.go:141] libmachine: (ha-158602-m03)   <name>ha-158602-m03</name>
	I0827 22:24:05.364609   29384 main.go:141] libmachine: (ha-158602-m03)   <memory unit='MiB'>2200</memory>
	I0827 22:24:05.364621   29384 main.go:141] libmachine: (ha-158602-m03)   <vcpu>2</vcpu>
	I0827 22:24:05.364632   29384 main.go:141] libmachine: (ha-158602-m03)   <features>
	I0827 22:24:05.364642   29384 main.go:141] libmachine: (ha-158602-m03)     <acpi/>
	I0827 22:24:05.364655   29384 main.go:141] libmachine: (ha-158602-m03)     <apic/>
	I0827 22:24:05.364666   29384 main.go:141] libmachine: (ha-158602-m03)     <pae/>
	I0827 22:24:05.364674   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.364685   29384 main.go:141] libmachine: (ha-158602-m03)   </features>
	I0827 22:24:05.364691   29384 main.go:141] libmachine: (ha-158602-m03)   <cpu mode='host-passthrough'>
	I0827 22:24:05.364699   29384 main.go:141] libmachine: (ha-158602-m03)   
	I0827 22:24:05.364704   29384 main.go:141] libmachine: (ha-158602-m03)   </cpu>
	I0827 22:24:05.364712   29384 main.go:141] libmachine: (ha-158602-m03)   <os>
	I0827 22:24:05.364720   29384 main.go:141] libmachine: (ha-158602-m03)     <type>hvm</type>
	I0827 22:24:05.364732   29384 main.go:141] libmachine: (ha-158602-m03)     <boot dev='cdrom'/>
	I0827 22:24:05.364745   29384 main.go:141] libmachine: (ha-158602-m03)     <boot dev='hd'/>
	I0827 22:24:05.364757   29384 main.go:141] libmachine: (ha-158602-m03)     <bootmenu enable='no'/>
	I0827 22:24:05.364767   29384 main.go:141] libmachine: (ha-158602-m03)   </os>
	I0827 22:24:05.364775   29384 main.go:141] libmachine: (ha-158602-m03)   <devices>
	I0827 22:24:05.364786   29384 main.go:141] libmachine: (ha-158602-m03)     <disk type='file' device='cdrom'>
	I0827 22:24:05.364801   29384 main.go:141] libmachine: (ha-158602-m03)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/boot2docker.iso'/>
	I0827 22:24:05.364815   29384 main.go:141] libmachine: (ha-158602-m03)       <target dev='hdc' bus='scsi'/>
	I0827 22:24:05.364832   29384 main.go:141] libmachine: (ha-158602-m03)       <readonly/>
	I0827 22:24:05.364846   29384 main.go:141] libmachine: (ha-158602-m03)     </disk>
	I0827 22:24:05.364859   29384 main.go:141] libmachine: (ha-158602-m03)     <disk type='file' device='disk'>
	I0827 22:24:05.364871   29384 main.go:141] libmachine: (ha-158602-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 22:24:05.364886   29384 main.go:141] libmachine: (ha-158602-m03)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/ha-158602-m03.rawdisk'/>
	I0827 22:24:05.364897   29384 main.go:141] libmachine: (ha-158602-m03)       <target dev='hda' bus='virtio'/>
	I0827 22:24:05.364907   29384 main.go:141] libmachine: (ha-158602-m03)     </disk>
	I0827 22:24:05.364918   29384 main.go:141] libmachine: (ha-158602-m03)     <interface type='network'>
	I0827 22:24:05.364930   29384 main.go:141] libmachine: (ha-158602-m03)       <source network='mk-ha-158602'/>
	I0827 22:24:05.364941   29384 main.go:141] libmachine: (ha-158602-m03)       <model type='virtio'/>
	I0827 22:24:05.364949   29384 main.go:141] libmachine: (ha-158602-m03)     </interface>
	I0827 22:24:05.364959   29384 main.go:141] libmachine: (ha-158602-m03)     <interface type='network'>
	I0827 22:24:05.364973   29384 main.go:141] libmachine: (ha-158602-m03)       <source network='default'/>
	I0827 22:24:05.364985   29384 main.go:141] libmachine: (ha-158602-m03)       <model type='virtio'/>
	I0827 22:24:05.364995   29384 main.go:141] libmachine: (ha-158602-m03)     </interface>
	I0827 22:24:05.365003   29384 main.go:141] libmachine: (ha-158602-m03)     <serial type='pty'>
	I0827 22:24:05.365014   29384 main.go:141] libmachine: (ha-158602-m03)       <target port='0'/>
	I0827 22:24:05.365025   29384 main.go:141] libmachine: (ha-158602-m03)     </serial>
	I0827 22:24:05.365036   29384 main.go:141] libmachine: (ha-158602-m03)     <console type='pty'>
	I0827 22:24:05.365044   29384 main.go:141] libmachine: (ha-158602-m03)       <target type='serial' port='0'/>
	I0827 22:24:05.365056   29384 main.go:141] libmachine: (ha-158602-m03)     </console>
	I0827 22:24:05.365066   29384 main.go:141] libmachine: (ha-158602-m03)     <rng model='virtio'>
	I0827 22:24:05.365078   29384 main.go:141] libmachine: (ha-158602-m03)       <backend model='random'>/dev/random</backend>
	I0827 22:24:05.365089   29384 main.go:141] libmachine: (ha-158602-m03)     </rng>
	I0827 22:24:05.365099   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.365107   29384 main.go:141] libmachine: (ha-158602-m03)     
	I0827 22:24:05.365113   29384 main.go:141] libmachine: (ha-158602-m03)   </devices>
	I0827 22:24:05.365125   29384 main.go:141] libmachine: (ha-158602-m03) </domain>
	I0827 22:24:05.365131   29384 main.go:141] libmachine: (ha-158602-m03) 
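[editor's note] The lines above show the libvirt <domain> XML the kvm2 driver defines for the new node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the raw disk, and two virtio NICs (the private mk-ha-158602 network plus the default network). A greatly reduced encoding/xml sketch of emitting such a definition follows; struct names are illustrative and not the driver's own.

	// domainxml_sketch.go: emit a minimal libvirt <domain> definition.
	// Greatly reduced compared to the XML logged above; names are illustrative.
	package main

	import (
		"encoding/xml"
		"fmt"
	)

	type memory struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	}

	type domain struct {
		XMLName xml.Name `xml:"domain"`
		Type    string   `xml:"type,attr"`
		Name    string   `xml:"name"`
		Memory  memory   `xml:"memory"`
		VCPU    int      `xml:"vcpu"`
	}

	func main() {
		d := domain{
			Type:   "kvm",
			Name:   "ha-158602-m03",
			Memory: memory{Unit: "MiB", Value: "2200"},
			VCPU:   2,
		}
		out, err := xml.MarshalIndent(d, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}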
	I0827 22:24:05.372087   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:3e:7a:6b in network default
	I0827 22:24:05.372733   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring networks are active...
	I0827 22:24:05.372756   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:05.373716   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring network default is active
	I0827 22:24:05.374012   29384 main.go:141] libmachine: (ha-158602-m03) Ensuring network mk-ha-158602 is active
	I0827 22:24:05.374445   29384 main.go:141] libmachine: (ha-158602-m03) Getting domain xml...
	I0827 22:24:05.375267   29384 main.go:141] libmachine: (ha-158602-m03) Creating domain...
	I0827 22:24:06.609947   29384 main.go:141] libmachine: (ha-158602-m03) Waiting to get IP...
	I0827 22:24:06.610674   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:06.611152   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:06.611177   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:06.611113   30149 retry.go:31] will retry after 220.771743ms: waiting for machine to come up
	I0827 22:24:06.833726   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:06.834179   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:06.834206   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:06.834158   30149 retry.go:31] will retry after 323.861578ms: waiting for machine to come up
	I0827 22:24:07.159673   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.160206   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.160239   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.160149   30149 retry.go:31] will retry after 297.83033ms: waiting for machine to come up
	I0827 22:24:07.459728   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.460226   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.460249   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.460180   30149 retry.go:31] will retry after 438.110334ms: waiting for machine to come up
	I0827 22:24:07.899697   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:07.900092   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:07.900113   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:07.900051   30149 retry.go:31] will retry after 575.629093ms: waiting for machine to come up
	I0827 22:24:08.476870   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:08.477464   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:08.477496   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:08.477409   30149 retry.go:31] will retry after 621.866439ms: waiting for machine to come up
	I0827 22:24:09.101439   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:09.101895   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:09.101924   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:09.101836   30149 retry.go:31] will retry after 983.692714ms: waiting for machine to come up
	I0827 22:24:10.087444   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:10.087967   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:10.087999   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:10.087891   30149 retry.go:31] will retry after 983.631541ms: waiting for machine to come up
	I0827 22:24:11.072907   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:11.073346   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:11.073377   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:11.073309   30149 retry.go:31] will retry after 1.80000512s: waiting for machine to come up
	I0827 22:24:12.875166   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:12.875490   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:12.875522   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:12.875469   30149 retry.go:31] will retry after 2.085011068s: waiting for machine to come up
	I0827 22:24:14.962334   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:14.962817   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:14.962845   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:14.962781   30149 retry.go:31] will retry after 2.169328394s: waiting for machine to come up
	I0827 22:24:17.134398   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:17.134825   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:17.134851   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:17.134779   30149 retry.go:31] will retry after 2.479018152s: waiting for machine to come up
	I0827 22:24:19.616301   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:19.616679   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:19.616703   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:19.616636   30149 retry.go:31] will retry after 4.325988713s: waiting for machine to come up
	I0827 22:24:23.947128   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:23.947587   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find current IP address of domain ha-158602-m03 in network mk-ha-158602
	I0827 22:24:23.947608   29384 main.go:141] libmachine: (ha-158602-m03) DBG | I0827 22:24:23.947559   30149 retry.go:31] will retry after 4.889309517s: waiting for machine to come up
	I0827 22:24:28.841489   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.842062   29384 main.go:141] libmachine: (ha-158602-m03) Found IP for machine: 192.168.39.91
	I0827 22:24:28.842087   29384 main.go:141] libmachine: (ha-158602-m03) Reserving static IP address...
	I0827 22:24:28.842103   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has current primary IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.842468   29384 main.go:141] libmachine: (ha-158602-m03) DBG | unable to find host DHCP lease matching {name: "ha-158602-m03", mac: "52:54:00:5e:4d:2e", ip: "192.168.39.91"} in network mk-ha-158602
	I0827 22:24:28.916856   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Getting to WaitForSSH function...
	I0827 22:24:28.916884   29384 main.go:141] libmachine: (ha-158602-m03) Reserved static IP address: 192.168.39.91
	I0827 22:24:28.916897   29384 main.go:141] libmachine: (ha-158602-m03) Waiting for SSH to be available...
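[editor's note] The "will retry after" lines above poll libvirt's DHCP leases with growing, jittered delays until the guest obtains an address (here 192.168.39.91, after roughly 24 seconds). A generic sketch of that retry pattern follows; the lease lookup itself is a stub, not the driver's real query.

	// retrybackoff_sketch.go: retry an operation with growing, jittered delays.
	// lookupIP is a placeholder for querying the libvirt DHCP leases.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) {
		return "", errors.New("no lease yet") // placeholder
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("attempt %d failed, retrying after %s\n", attempt, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		fmt.Println("gave up waiting for an IP")
	}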
	I0827 22:24:28.919631   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.919985   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:28.920015   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:28.920155   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using SSH client type: external
	I0827 22:24:28.920185   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa (-rw-------)
	I0827 22:24:28.920225   29384 main.go:141] libmachine: (ha-158602-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 22:24:28.920247   29384 main.go:141] libmachine: (ha-158602-m03) DBG | About to run SSH command:
	I0827 22:24:28.920266   29384 main.go:141] libmachine: (ha-158602-m03) DBG | exit 0
	I0827 22:24:29.044609   29384 main.go:141] libmachine: (ha-158602-m03) DBG | SSH cmd err, output: <nil>: 
	I0827 22:24:29.044991   29384 main.go:141] libmachine: (ha-158602-m03) KVM machine creation complete!
	I0827 22:24:29.045244   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:29.045854   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:29.046062   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:29.046231   29384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 22:24:29.046248   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:24:29.047459   29384 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 22:24:29.047474   29384 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 22:24:29.047481   29384 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 22:24:29.047489   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.049787   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.050279   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.050306   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.050594   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.050775   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.050916   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.051058   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.051188   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.051385   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.051399   29384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 22:24:29.151732   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
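[editor's note] Both SSH probes above (the external ssh invocation and the native client) simply run "exit 0" as the docker user with the generated key until the command succeeds. A golang.org/x/crypto/ssh sketch of that availability check follows; the key path is a placeholder and host-key checking is disabled to match the StrictHostKeyChecking=no options in the log.

	// sshprobe_sketch.go: run "exit 0" over SSH to confirm the guest accepts connections.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-158602-m03/id_rsa") // assumption
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		}
		client, err := ssh.Dial("tcp", "192.168.39.91:22", cfg)
		if err != nil {
			fmt.Println("ssh not ready:", err)
			return
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			fmt.Println("command failed:", err)
			return
		}
		fmt.Println("ssh is available")
	}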
	I0827 22:24:29.151754   29384 main.go:141] libmachine: Detecting the provisioner...
	I0827 22:24:29.151764   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.154524   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.154867   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.154902   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.155058   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.155232   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.155354   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.155468   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.155694   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.155885   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.155900   29384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 22:24:29.257207   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 22:24:29.257298   29384 main.go:141] libmachine: found compatible host: buildroot
	I0827 22:24:29.257313   29384 main.go:141] libmachine: Provisioning with buildroot...
	I0827 22:24:29.257326   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.257573   29384 buildroot.go:166] provisioning hostname "ha-158602-m03"
	I0827 22:24:29.257599   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.257800   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.260826   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.261209   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.261236   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.261525   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.261742   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.261929   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.262053   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.262334   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.262556   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.262573   29384 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602-m03 && echo "ha-158602-m03" | sudo tee /etc/hostname
	I0827 22:24:29.380133   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602-m03
	
	I0827 22:24:29.380160   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.383586   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.384086   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.384115   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.384352   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.384582   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.384775   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.385106   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.385331   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.385537   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.385553   29384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:24:29.492854   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
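[editor's note] Hostname provisioning above runs two remote commands: one sets the hostname and writes /etc/hostname, the other patches /etc/hosts only when no entry for the new name exists yet (rewriting an existing 127.0.1.1 line, otherwise appending one). The sketch below only builds those command strings; executing them over SSH would follow the earlier ssh probe sketch. It is an approximation of the logged commands, not minikube's provisioner code.

	// hostnamecmd_sketch.go: build the remote commands used to set a guest hostname.
	package main

	import "fmt"

	func hostnameCommands(name string) []string {
		set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		// Only touch /etc/hosts when no entry for the new hostname exists yet.
		hosts := fmt.Sprintf(
			"if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
				"if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
				"sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
				"else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi", name)
		return []string{set, hosts}
	}

	func main() {
		for _, c := range hostnameCommands("ha-158602-m03") {
			fmt.Println(c)
		}
	}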
	I0827 22:24:29.492886   29384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:24:29.492907   29384 buildroot.go:174] setting up certificates
	I0827 22:24:29.492919   29384 provision.go:84] configureAuth start
	I0827 22:24:29.492930   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetMachineName
	I0827 22:24:29.493253   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:29.496205   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.496676   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.496706   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.496850   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.499310   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.499811   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.499839   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.500014   29384 provision.go:143] copyHostCerts
	I0827 22:24:29.500042   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:24:29.500069   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:24:29.500079   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:24:29.500145   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:24:29.500221   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:24:29.500247   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:24:29.500257   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:24:29.500296   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:24:29.500368   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:24:29.500388   29384 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:24:29.500394   29384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:24:29.500419   29384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:24:29.500488   29384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602-m03 san=[127.0.0.1 192.168.39.91 ha-158602-m03 localhost minikube]
	I0827 22:24:29.630247   29384 provision.go:177] copyRemoteCerts
	I0827 22:24:29.630300   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:24:29.630323   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.633003   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.633438   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.633464   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.633664   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.633858   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.634021   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.634153   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:29.714965   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:24:29.715031   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:24:29.738180   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:24:29.738256   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 22:24:29.761405   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:24:29.761482   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:24:29.785628   29384 provision.go:87] duration metric: took 292.694937ms to configureAuth
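[editor's note] configureAuth above copies the host CA and client certs, then issues a server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube before copying it to /etc/docker on the guest. A condensed crypto/x509 sketch of issuing such a SAN-bearing server cert from a CA follows; the CA is generated inline only so the sketch is self-contained, and error handling is omitted for brevity.

	// servercert_sketch.go: issue a server certificate with SANs like those in the log,
	// signed by a CA. Names, IPs and the inline CA are illustrative assumptions.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; in the real flow this is loaded from ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-158602-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-158602-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.91")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}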
	I0827 22:24:29.785657   29384 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:24:29.785858   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:29.785943   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:29.788766   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.789195   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:29.789217   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:29.789406   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:29.789632   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.789778   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:29.789895   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:29.790113   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:29.790272   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:29.790287   29384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:24:30.022419   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:24:30.022446   29384 main.go:141] libmachine: Checking connection to Docker...
	I0827 22:24:30.022456   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetURL
	I0827 22:24:30.023886   29384 main.go:141] libmachine: (ha-158602-m03) DBG | Using libvirt version 6000000
	I0827 22:24:30.025890   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.026243   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.026274   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.026403   29384 main.go:141] libmachine: Docker is up and running!
	I0827 22:24:30.026416   29384 main.go:141] libmachine: Reticulating splines...
	I0827 22:24:30.026424   29384 client.go:171] duration metric: took 25.135406733s to LocalClient.Create
	I0827 22:24:30.026449   29384 start.go:167] duration metric: took 25.135470642s to libmachine.API.Create "ha-158602"
	I0827 22:24:30.026463   29384 start.go:293] postStartSetup for "ha-158602-m03" (driver="kvm2")
	I0827 22:24:30.026479   29384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:24:30.026500   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.026761   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:24:30.026784   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.028978   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.029305   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.029328   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.029461   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.029658   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.029828   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.029990   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.110331   29384 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:24:30.114683   29384 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:24:30.114715   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:24:30.114804   29384 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:24:30.114918   29384 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:24:30.114931   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:24:30.115046   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:24:30.124148   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:24:30.146865   29384 start.go:296] duration metric: took 120.387267ms for postStartSetup
	I0827 22:24:30.146917   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetConfigRaw
	I0827 22:24:30.147629   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:30.150260   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.150677   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.150705   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.150927   29384 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:24:30.151114   29384 start.go:128] duration metric: took 25.278483191s to createHost
	I0827 22:24:30.151134   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.153331   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.153665   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.153693   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.153848   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.154038   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.154211   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.154330   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.154480   29384 main.go:141] libmachine: Using SSH client type: native
	I0827 22:24:30.154629   29384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.91 22 <nil> <nil>}
	I0827 22:24:30.154639   29384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:24:30.253676   29384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724797470.232935843
	
	I0827 22:24:30.253701   29384 fix.go:216] guest clock: 1724797470.232935843
	I0827 22:24:30.253712   29384 fix.go:229] Guest: 2024-08-27 22:24:30.232935843 +0000 UTC Remote: 2024-08-27 22:24:30.151124995 +0000 UTC m=+144.459299351 (delta=81.810848ms)
	I0827 22:24:30.253736   29384 fix.go:200] guest clock delta is within tolerance: 81.810848ms
	I0827 22:24:30.253744   29384 start.go:83] releasing machines lock for "ha-158602-m03", held for 25.381228219s
	I0827 22:24:30.253774   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.254044   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:30.257885   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.258339   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.258411   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.260666   29384 out.go:177] * Found network options:
	I0827 22:24:30.261992   29384 out.go:177]   - NO_PROXY=192.168.39.77,192.168.39.142
	W0827 22:24:30.263273   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0827 22:24:30.263300   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:24:30.263318   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.263878   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.264062   29384 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:24:30.264147   29384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:24:30.264192   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	W0827 22:24:30.264267   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	W0827 22:24:30.264290   29384 proxy.go:119] fail to check proxy env: Error ip not in block
	I0827 22:24:30.264347   29384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:24:30.264363   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:24:30.267160   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267307   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267579   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.267605   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267782   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.267948   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:30.267971   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:30.267972   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.268133   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:24:30.268191   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.268298   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:24:30.268385   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.268448   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:24:30.268604   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:24:30.493925   29384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:24:30.500126   29384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:24:30.500179   29384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:24:30.515978   29384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 22:24:30.515999   29384 start.go:495] detecting cgroup driver to use...
	I0827 22:24:30.516069   29384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:24:30.532827   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:24:30.551267   29384 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:24:30.551335   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:24:30.564779   29384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:24:30.578641   29384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:24:30.699297   29384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:24:30.868373   29384 docker.go:233] disabling docker service ...
	I0827 22:24:30.868443   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:24:30.882109   29384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:24:30.894160   29384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:24:31.007677   29384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:24:31.132026   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:24:31.145973   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:24:31.164500   29384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:24:31.164567   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.174821   29384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:24:31.174880   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.184755   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.195049   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.205076   29384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:24:31.216111   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.225938   29384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.242393   29384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:24:31.252457   29384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:24:31.261503   29384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 22:24:31.261564   29384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 22:24:31.274618   29384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:24:31.284766   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:31.408223   29384 ssh_runner.go:195] Run: sudo systemctl restart crio
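The run above configures CRI-O on the new node one sed call at a time. As a minimal sketch only (not minikube's actual code path), the same steps consolidated into a single script, reusing the drop-in file, pause image, and sysctl values shown in the log:

    #!/usr/bin/env bash
    # Sketch: consolidates the CRI-O configuration commands from the log above.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Same pause image and cgroupfs cgroup driver that this run configures.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Allow unprivileged ports inside pods via default_sysctls.
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

    # Kernel prerequisites for bridged pod traffic, then restart CRI-O.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio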
	I0827 22:24:31.498819   29384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:24:31.498885   29384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:24:31.503305   29384 start.go:563] Will wait 60s for crictl version
	I0827 22:24:31.503341   29384 ssh_runner.go:195] Run: which crictl
	I0827 22:24:31.506812   29384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:24:31.546189   29384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:24:31.546268   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:24:31.576994   29384 ssh_runner.go:195] Run: crio --version
	I0827 22:24:31.604550   29384 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:24:31.605653   29384 out.go:177]   - env NO_PROXY=192.168.39.77
	I0827 22:24:31.606714   29384 out.go:177]   - env NO_PROXY=192.168.39.77,192.168.39.142
	I0827 22:24:31.608035   29384 main.go:141] libmachine: (ha-158602-m03) Calling .GetIP
	I0827 22:24:31.611599   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:31.612059   29384 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:24:31.612084   29384 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:24:31.612285   29384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:24:31.616326   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:24:31.628845   29384 mustload.go:65] Loading cluster: ha-158602
	I0827 22:24:31.629094   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:31.629335   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:31.629369   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:31.643988   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0827 22:24:31.644501   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:31.645013   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:31.645027   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:31.645366   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:31.645542   29384 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:24:31.646891   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:24:31.647169   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:31.647210   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:31.663133   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0827 22:24:31.663491   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:31.663934   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:31.663954   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:31.664237   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:31.664416   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:24:31.664592   29384 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.91
	I0827 22:24:31.664605   29384 certs.go:194] generating shared ca certs ...
	I0827 22:24:31.664626   29384 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.664752   29384 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:24:31.664812   29384 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:24:31.664826   29384 certs.go:256] generating profile certs ...
	I0827 22:24:31.664919   29384 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:24:31.664951   29384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387
	I0827 22:24:31.664973   29384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.91 192.168.39.254]
	I0827 22:24:31.826242   29384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 ...
	I0827 22:24:31.826270   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387: {Name:mkc02f69cd5a3b130232a3c673e047eaa95570fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.826430   29384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387 ...
	I0827 22:24:31.826442   29384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387: {Name:mkd84ac9539a4b0a8e9556967b7d93a1480590fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:24:31.826507   29384 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.eb642387 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:24:31.826646   29384 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.eb642387 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:24:31.826765   29384 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:24:31.826781   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:24:31.826794   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:24:31.826805   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:24:31.826819   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:24:31.826831   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:24:31.826843   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:24:31.826855   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:24:31.826866   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:24:31.826909   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:24:31.826934   29384 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:24:31.826943   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:24:31.826966   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:24:31.826987   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:24:31.827007   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:24:31.827043   29384 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:24:31.827069   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:24:31.827083   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:31.827095   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:24:31.827126   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:24:31.830162   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:31.830639   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:24:31.830667   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:31.830891   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:24:31.831106   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:24:31.831277   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:24:31.831466   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:24:31.908859   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0827 22:24:31.913741   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0827 22:24:31.928310   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0827 22:24:31.932553   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0827 22:24:31.943592   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0827 22:24:31.947657   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0827 22:24:31.957815   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0827 22:24:31.961448   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0827 22:24:31.971381   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0827 22:24:31.975451   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0827 22:24:31.984717   29384 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0827 22:24:31.988487   29384 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0827 22:24:32.000232   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:24:32.023455   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:24:32.046362   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:24:32.068702   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:24:32.090208   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0827 22:24:32.113468   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 22:24:32.137053   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:24:32.160293   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:24:32.183753   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:24:32.205646   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:24:32.227241   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:24:32.249455   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0827 22:24:32.264950   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0827 22:24:32.280275   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0827 22:24:32.295688   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0827 22:24:32.310808   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0827 22:24:32.326689   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0827 22:24:32.342889   29384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
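The transfers above copy the cluster's shared key material (cluster and proxy CAs, service-account keys, front-proxy and etcd CAs, plus the profile's apiserver and proxy-client certs) from the local cache onto the new control-plane node before kubeadm runs. A rough manual equivalent, as a sketch that assumes the SSH key and docker user shown in the log and the /var/lib/minikube/certs layout above, not minikube's actual transfer code:

    # Sketch: push the shared CA material to the new node over SSH.
    MK=/home/jenkins/minikube-integration/19522-7571/.minikube   # cache dir from this run
    KEY=$MK/machines/ha-158602-m03/id_rsa
    NODE=192.168.39.91

    for f in ca.crt ca.key proxy-client-ca.crt proxy-client-ca.key; do
      scp -i "$KEY" "$MK/$f" docker@"$NODE":/tmp/"$f"
      ssh -i "$KEY" docker@"$NODE" sudo mv /tmp/"$f" /var/lib/minikube/certs/"$f"
    done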
	I0827 22:24:32.359085   29384 ssh_runner.go:195] Run: openssl version
	I0827 22:24:32.364803   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:24:32.375530   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.380385   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.380454   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:24:32.386694   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:24:32.397159   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:24:32.407586   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.411606   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.411664   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:24:32.416828   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:24:32.427230   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:24:32.437521   29384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.441687   29384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.441739   29384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:24:32.446918   29384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
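The openssl x509 -hash calls above compute the subject-hash filenames OpenSSL looks up under /etc/ssl/certs, which is why each certificate is then symlinked as <hash>.0. A minimal sketch of the same convention for the minikube CA, whose hash b5213941 appears in the log:

    # Sketch: link a CA certificate under the subject-hash name OpenSSL resolves.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"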
	I0827 22:24:32.457231   29384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:24:32.460934   29384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:24:32.460982   29384 kubeadm.go:934] updating node {m03 192.168.39.91 8443 v1.31.0 crio true true} ...
	I0827 22:24:32.461053   29384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
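The [Unit]/[Service] fragment above is the kubelet drop-in that is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp further down); the empty ExecStart= line is the usual systemd idiom for clearing the packaged command before substituting minikube's own. A quick sketch for checking the merged unit on the node:

    # Sketch: inspect the effective kubelet unit after the drop-in is installed.
    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet   # effective kubelet command line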
	I0827 22:24:32.461077   29384 kube-vip.go:115] generating kube-vip config ...
	I0827 22:24:32.461109   29384 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:24:32.479250   29384 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:24:32.479323   29384 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
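The YAML above is installed as a static pod manifest (kube-vip.yaml is copied to /etc/kubernetes/manifests below), so each control-plane kubelet runs kube-vip, the instances lease-elect a leader via plndr-cp-lock, and the leader answers ARP for the 192.168.39.254 VIP on eth0 and load-balances port 8443. A small verification sketch on the node, assuming the manifest path used later in this log:

    # Sketch: confirm the kube-vip static pod and the VIP on a control-plane node.
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml   # manifest generated from the config above
    sudo crictl ps --name kube-vip                     # kube-vip container once kubelet is up
    ip addr show eth0 | grep 192.168.39.254            # VIP is bound on the current leader only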
	I0827 22:24:32.479384   29384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:24:32.488896   29384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0827 22:24:32.488963   29384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0827 22:24:32.498043   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0827 22:24:32.498065   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0827 22:24:32.498072   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:24:32.498079   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:24:32.498141   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0827 22:24:32.498143   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0827 22:24:32.498043   29384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0827 22:24:32.498272   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:24:32.505434   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0827 22:24:32.505469   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0827 22:24:32.505516   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0827 22:24:32.505545   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0827 22:24:32.534867   29384 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:24:32.534982   29384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0827 22:24:32.615035   29384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0827 22:24:32.615070   29384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
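The three transfers above push the cached kubectl, kubeadm, and kubelet binaries to /var/lib/minikube/binaries/v1.31.0 because nothing was found there; the log also records the dl.k8s.io URLs (with .sha256 checksum files) that would be used when the local cache is empty. A sketch of fetching one binary that way, assuming the same URL scheme and target directory:

    # Sketch: download and verify a Kubernetes binary as described by the URLs in the log.
    VER=v1.31.0
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo mkdir -p "/var/lib/minikube/binaries/${VER}"
    sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${VER}/kubelet"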
	I0827 22:24:33.364326   29384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0827 22:24:33.373755   29384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0827 22:24:33.392011   29384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:24:33.407687   29384 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0827 22:24:33.423475   29384 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:24:33.427162   29384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:24:33.438995   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:33.574498   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:24:33.593760   29384 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:24:33.594113   29384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:24:33.594163   29384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:24:33.610086   29384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0827 22:24:33.610556   29384 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:24:33.611079   29384 main.go:141] libmachine: Using API Version  1
	I0827 22:24:33.611104   29384 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:24:33.611464   29384 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:24:33.611705   29384 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:24:33.611879   29384 start.go:317] joinCluster: &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:24:33.612032   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0827 22:24:33.612052   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:24:33.614979   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:33.615446   29384 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:24:33.615480   29384 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:24:33.615607   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:24:33.615793   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:24:33.615971   29384 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:24:33.616122   29384 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:24:33.772325   29384 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:33.772384   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vkba7b.6o5mdwymayp2q8ew --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m03 --control-plane --apiserver-advertise-address=192.168.39.91 --apiserver-bind-port=8443"
	I0827 22:24:57.256656   29384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vkba7b.6o5mdwymayp2q8ew --discovery-token-ca-cert-hash sha256:cca8b55451f4d8c8d8931604765f1b8db320a5ab852018d2945aca127adb7c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-158602-m03 --control-plane --apiserver-advertise-address=192.168.39.91 --apiserver-bind-port=8443": (23.484245395s)
	I0827 22:24:57.256693   29384 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0827 22:24:57.833197   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-158602-m03 minikube.k8s.io/updated_at=2024_08_27T22_24_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=ha-158602 minikube.k8s.io/primary=false
	I0827 22:24:57.980079   29384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-158602-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0827 22:24:58.093149   29384 start.go:319] duration metric: took 24.481266634s to joinCluster
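Taken together, the commands above are the standard flow for adding another control-plane member: print a join command on an existing control plane, run it on the new machine with --control-plane and the node's own advertise address, bring up kubelet, then label the node and drop the NoSchedule control-plane taint so it can also run workloads. As a sketch, with the token and CA hash left as placeholders filled in from the first command's output:

    # Sketch of the control-plane join flow shown in the log.
    # On an existing control-plane node:
    sudo kubeadm token create --print-join-command --ttl=0

    # On the new node (ha-158602-m03), using the printed <token> and <hash>:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.91 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --ignore-preflight-errors=all \
      --node-name=ha-158602-m03
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet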
	I0827 22:24:58.093232   29384 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 22:24:58.093529   29384 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:24:58.094736   29384 out.go:177] * Verifying Kubernetes components...
	I0827 22:24:58.095953   29384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:24:58.323812   29384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:24:58.340373   29384 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:24:58.340720   29384 kapi.go:59] client config for ha-158602: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0827 22:24:58.340780   29384 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.77:8443
	I0827 22:24:58.341049   29384 node_ready.go:35] waiting up to 6m0s for node "ha-158602-m03" to be "Ready" ...
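The repeated GETs that follow are minikube polling /api/v1/nodes/ha-158602-m03 roughly every 500ms, waiting up to 6m for its Ready condition to become true. The same wait can be expressed with kubectl, as a sketch that assumes the kubeconfig loaded by kapi.go:59 above:

    # Sketch: equivalent readiness wait with kubectl instead of raw API polling.
    export KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
    kubectl wait --for=condition=Ready node/ha-158602-m03 --timeout=6m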
	I0827 22:24:58.341135   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:58.341145   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:58.341156   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:58.341164   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:58.344828   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:58.841182   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:58.841220   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:58.841230   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:58.841238   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:58.844898   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:24:59.341770   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:59.341794   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:59.341804   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:59.341809   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:59.365046   29384 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0827 22:24:59.841468   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:24:59.841492   29384 round_trippers.go:469] Request Headers:
	I0827 22:24:59.841502   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:24:59.841508   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:24:59.844958   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:00.341213   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:00.341234   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:00.341242   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:00.341246   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:00.345165   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:00.345780   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:00.841361   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:00.841385   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:00.841397   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:00.841402   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:00.845196   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:01.341738   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:01.341765   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:01.341776   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:01.341790   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:01.347416   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:25:01.842065   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:01.842086   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:01.842094   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:01.842099   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:01.845230   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:02.342005   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:02.342026   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:02.342034   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:02.342040   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:02.345683   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:02.346573   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:02.841885   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:02.841909   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:02.841919   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:02.841923   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:02.845649   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:03.341761   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:03.341782   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:03.341792   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:03.341799   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:03.345961   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:03.841372   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:03.841396   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:03.841404   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:03.841410   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:03.844810   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:04.341710   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:04.341731   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:04.341739   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:04.341743   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:04.350472   29384 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0827 22:25:04.351534   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:04.842285   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:04.842337   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:04.842345   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:04.842350   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:04.845439   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:05.341967   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:05.341995   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:05.342008   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:05.342012   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:05.346630   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:05.841422   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:05.841446   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:05.841457   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:05.841463   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:05.844685   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.341960   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:06.341980   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:06.341988   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:06.341991   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:06.345392   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.842039   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:06.842061   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:06.842069   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:06.842072   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:06.845497   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:06.846198   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:07.341301   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:07.341339   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:07.341351   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:07.341357   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:07.344768   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:07.841623   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:07.841645   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:07.841653   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:07.841658   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:07.845281   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.342263   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:08.342286   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:08.342296   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:08.342301   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:08.346298   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.841710   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:08.841731   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:08.841740   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:08.841745   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:08.845551   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:08.846348   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:09.341358   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:09.341383   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:09.341391   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:09.341394   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:09.349248   29384 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0827 22:25:09.841489   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:09.841512   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:09.841520   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:09.841523   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:09.844713   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:10.341500   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:10.341530   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:10.341542   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:10.341550   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:10.344750   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:10.841338   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:10.841357   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:10.841365   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:10.841375   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:10.844194   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:11.341627   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:11.341665   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:11.341673   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:11.341678   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:11.344879   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:11.345463   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:11.841742   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:11.841764   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:11.841772   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:11.841776   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:11.845181   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:12.342112   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:12.342134   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:12.342142   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:12.342147   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:12.345618   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:12.841355   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:12.841389   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:12.841398   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:12.841402   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:12.844955   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:13.341690   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:13.341710   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:13.341720   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:13.341728   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:13.345304   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:13.346042   29384 node_ready.go:53] node "ha-158602-m03" has status "Ready":"False"
	I0827 22:25:13.841770   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:13.841797   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:13.841807   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:13.841813   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:13.846456   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:14.342238   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:14.342266   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.342279   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.342285   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.345294   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.841748   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:14.841775   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.841785   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.841794   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.845143   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:14.845644   29384 node_ready.go:49] node "ha-158602-m03" has status "Ready":"True"
	I0827 22:25:14.845661   29384 node_ready.go:38] duration metric: took 16.50459208s for node "ha-158602-m03" to be "Ready" ...
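The roughly 16-second wait recorded above is a plain poll loop: about every 500ms the client GETs /api/v1/nodes/ha-158602-m03 and checks whether the node's Ready condition has flipped to True. A minimal client-go sketch of that pattern follows; it is illustrative only (not minikube's actual node_ready.go), the kubeconfig path is a placeholder, and the timeout is arbitrary.

	// Illustrative sketch: poll a node until its Ready condition reports True,
	// mirroring the GET loop in the log above. Not minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the test run uses its own minikube profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // arbitrary ceiling
		defer cancel()

		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-158602-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node to become Ready")
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the timestamps
			}
		}
	}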
	I0827 22:25:14.845670   29384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:25:14.845735   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:14.845746   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.845753   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.845758   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.852444   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:25:14.859174   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.859243   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxzgs
	I0827 22:25:14.859252   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.859259   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.859264   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.862125   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.862874   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.862889   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.862897   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.862902   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.865914   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:14.868714   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.868751   29384 pod_ready.go:82] duration metric: took 9.552798ms for pod "coredns-6f6b679f8f-jxzgs" in "kube-system" namespace to be "Ready" ...
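From here the log repeats the same two-request dance for each system-critical pod: GET the pod, GET the node it runs on, and report whether the pod's Ready condition is True. A rough sketch of the per-pod part of that check, as a hypothetical helper written only for illustration (it assumes a clientset configured as in the earlier sketch):

	// Hypothetical helper, for illustration only: report whether a pod's
	// PodReady condition is True, as the pod_ready.go lines above record.
	package podcheck

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

Polling a check like this for pods such as coredns-6f6b679f8f-x6dcd or etcd-ha-158602-m03 until it returns true is the loop the remaining pod_ready.go lines trace.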
	I0827 22:25:14.868764   29384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.868828   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x6dcd
	I0827 22:25:14.868839   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.868848   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.868852   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.871739   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.872414   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.872429   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.872436   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.872440   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.875080   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.875596   29384 pod_ready.go:93] pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.875612   29384 pod_ready.go:82] duration metric: took 6.840862ms for pod "coredns-6f6b679f8f-x6dcd" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.875621   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.875666   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602
	I0827 22:25:14.875674   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.875680   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.875684   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.878164   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.878647   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:14.878659   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.878666   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.878670   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.881013   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.881460   29384 pod_ready.go:93] pod "etcd-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.881474   29384 pod_ready.go:82] duration metric: took 5.84732ms for pod "etcd-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.881482   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.881526   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m02
	I0827 22:25:14.881533   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.881540   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.881546   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.883856   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.884470   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:14.884486   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:14.884497   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:14.884502   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:14.886933   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:14.887476   29384 pod_ready.go:93] pod "etcd-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:14.887498   29384 pod_ready.go:82] duration metric: took 6.001947ms for pod "etcd-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:14.887512   29384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.041884   29384 request.go:632] Waited for 154.30673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m03
	I0827 22:25:15.041949   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/etcd-ha-158602-m03
	I0827 22:25:15.041954   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.041962   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.041967   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.045115   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.241955   29384 request.go:632] Waited for 196.283508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:15.242027   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:15.242033   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.242043   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.242051   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.245012   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:15.245485   29384 pod_ready.go:93] pod "etcd-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:15.245504   29384 pod_ready.go:82] duration metric: took 357.982788ms for pod "etcd-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
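The "Waited for … due to client-side throttling, not priority and fairness" lines that start appearing above are emitted by client-go itself: with no explicit settings, a rest.Config gets a token-bucket rate limiter of roughly 5 requests per second with a burst of 10, so a tight polling loop is periodically delayed on the client before a request ever reaches the API server. Purely as an illustration (the numbers below are arbitrary, and this is not a change minikube makes), the limiter can be loosened when building the client:

	// Illustrative only: raise client-go's client-side rate limits so chatty
	// polling loops are not delayed by the default token bucket (~5 QPS, burst 10).
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // defaults to ~5 requests/second when left at 0
		cfg.Burst = 100 // default burst is 10 when left at 0
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = cs // use the clientset as usual
	}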
	I0827 22:25:15.245520   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.442694   29384 request.go:632] Waited for 197.104258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:25:15.442771   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602
	I0827 22:25:15.442777   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.442785   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.442788   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.446249   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.642227   29384 request.go:632] Waited for 195.380269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:15.642281   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:15.642286   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.642293   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.642298   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.646122   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:15.646596   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:15.646615   29384 pod_ready.go:82] duration metric: took 401.087797ms for pod "kube-apiserver-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.646626   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:15.842668   29384 request.go:632] Waited for 195.964234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:25:15.842741   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m02
	I0827 22:25:15.842748   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:15.842759   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:15.842770   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:15.846125   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.042277   29384 request.go:632] Waited for 195.322782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:16.042344   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:16.042350   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.042356   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.042359   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.045670   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.046227   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.046243   29384 pod_ready.go:82] duration metric: took 399.610743ms for pod "kube-apiserver-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.046253   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.242328   29384 request.go:632] Waited for 196.015123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m03
	I0827 22:25:16.242393   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-158602-m03
	I0827 22:25:16.242400   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.242411   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.242418   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.245830   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.442826   29384 request.go:632] Waited for 196.393424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:16.442877   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:16.442882   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.442895   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.442902   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.446118   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.446821   29384 pod_ready.go:93] pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.446848   29384 pod_ready.go:82] duration metric: took 400.588436ms for pod "kube-apiserver-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.446858   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.642040   29384 request.go:632] Waited for 195.123868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:25:16.642123   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602
	I0827 22:25:16.642131   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.642152   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.642159   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.645748   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.842426   29384 request.go:632] Waited for 195.788855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:16.842489   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:16.842496   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:16.842509   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:16.842516   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:16.845834   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:16.846437   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:16.846463   29384 pod_ready.go:82] duration metric: took 399.599593ms for pod "kube-controller-manager-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:16.846473   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.042603   29384 request.go:632] Waited for 196.065274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:25:17.042676   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m02
	I0827 22:25:17.042681   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.042689   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.042695   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.046600   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.242365   29384 request.go:632] Waited for 194.921203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:17.242426   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:17.242433   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.242443   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.242457   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.247186   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:17.247645   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:17.247666   29384 pod_ready.go:82] duration metric: took 401.176595ms for pod "kube-controller-manager-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.247677   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.442798   29384 request.go:632] Waited for 195.05519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m03
	I0827 22:25:17.442861   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-158602-m03
	I0827 22:25:17.442878   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.442886   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.442891   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.446045   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.641881   29384 request.go:632] Waited for 195.274175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:17.641947   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:17.641955   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.641962   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.641970   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.645713   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:17.646253   29384 pod_ready.go:93] pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:17.646275   29384 pod_ready.go:82] duration metric: took 398.590477ms for pod "kube-controller-manager-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.646288   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:17.842295   29384 request.go:632] Waited for 195.928987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:25:17.842380   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5pmrv
	I0827 22:25:17.842387   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:17.842399   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:17.842409   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:17.846008   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.042394   29384 request.go:632] Waited for 195.35937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:18.042462   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:18.042472   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.042484   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.042493   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.046036   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.046624   29384 pod_ready.go:93] pod "kube-proxy-5pmrv" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.046644   29384 pod_ready.go:82] duration metric: took 400.349246ms for pod "kube-proxy-5pmrv" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.046657   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nhjgk" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.242739   29384 request.go:632] Waited for 195.992411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nhjgk
	I0827 22:25:18.242809   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nhjgk
	I0827 22:25:18.242820   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.242833   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.242845   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.245988   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.441820   29384 request.go:632] Waited for 195.243524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:18.441908   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:18.441919   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.441932   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.441938   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.445176   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.445864   29384 pod_ready.go:93] pod "kube-proxy-nhjgk" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.445884   29384 pod_ready.go:82] duration metric: took 399.220525ms for pod "kube-proxy-nhjgk" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.445894   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.642606   29384 request.go:632] Waited for 196.632365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:25:18.642678   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slgmm
	I0827 22:25:18.642690   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.642699   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.642706   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.645890   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.842187   29384 request.go:632] Waited for 195.34412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:18.842261   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:18.842270   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:18.842281   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:18.842286   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:18.845501   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:18.846119   29384 pod_ready.go:93] pod "kube-proxy-slgmm" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:18.846143   29384 pod_ready.go:82] duration metric: took 400.242013ms for pod "kube-proxy-slgmm" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:18.846157   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.042142   29384 request.go:632] Waited for 195.908855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:25:19.042232   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602
	I0827 22:25:19.042251   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.042261   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.042282   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.045495   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.242439   29384 request.go:632] Waited for 196.370297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:19.242501   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602
	I0827 22:25:19.242506   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.242513   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.242516   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.245992   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.246791   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:19.246811   29384 pod_ready.go:82] duration metric: took 400.645957ms for pod "kube-scheduler-ha-158602" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.246826   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.441865   29384 request.go:632] Waited for 194.97253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:25:19.441951   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m02
	I0827 22:25:19.441970   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.441994   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.442003   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.444825   29384 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0827 22:25:19.642767   29384 request.go:632] Waited for 197.281156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:19.642844   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m02
	I0827 22:25:19.642850   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.642857   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.642862   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.646271   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:19.646844   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:19.646867   29384 pod_ready.go:82] duration metric: took 400.028336ms for pod "kube-scheduler-ha-158602-m02" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.646881   29384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:19.842077   29384 request.go:632] Waited for 195.093907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m03
	I0827 22:25:19.842156   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-158602-m03
	I0827 22:25:19.842165   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:19.842176   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:19.842186   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:19.845567   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.042091   29384 request.go:632] Waited for 195.571883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:20.042174   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes/ha-158602-m03
	I0827 22:25:20.042180   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.042187   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.042192   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.045760   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.046425   29384 pod_ready.go:93] pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace has status "Ready":"True"
	I0827 22:25:20.046446   29384 pod_ready.go:82] duration metric: took 399.5556ms for pod "kube-scheduler-ha-158602-m03" in "kube-system" namespace to be "Ready" ...
	I0827 22:25:20.046461   29384 pod_ready.go:39] duration metric: took 5.200779619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:25:20.046481   29384 api_server.go:52] waiting for apiserver process to appear ...
	I0827 22:25:20.046538   29384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:25:20.062650   29384 api_server.go:72] duration metric: took 21.969376334s to wait for apiserver process to appear ...
	I0827 22:25:20.062684   29384 api_server.go:88] waiting for apiserver healthz status ...
	I0827 22:25:20.062704   29384 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0827 22:25:20.068550   29384 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0827 22:25:20.068617   29384 round_trippers.go:463] GET https://192.168.39.77:8443/version
	I0827 22:25:20.068625   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.068634   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.068638   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.069381   29384 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0827 22:25:20.069442   29384 api_server.go:141] control plane version: v1.31.0
	I0827 22:25:20.069452   29384 api_server.go:131] duration metric: took 6.762481ms to wait for apiserver health ...
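The two probes above map directly onto the discovery client's REST interface: a raw GET of /healthz that must come back 200 with the body "ok", then a GET of /version to read the control-plane version (v1.31.0 here). A small sketch, with the clientset construction assumed as in the first example:

	// Sketch of the health and version probes shown in the log.
	package apicheck

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
		// GET /healthz; a healthy apiserver answers 200 with the body "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		fmt.Printf("/healthz returned: %s\n", body)

		// GET /version; the log reports v1.31.0 for this cluster.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			return err
		}
		fmt.Println("control plane version:", v.GitVersion)
		return nil
	}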
	I0827 22:25:20.069459   29384 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 22:25:20.242800   29384 request.go:632] Waited for 173.256132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.242854   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.242859   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.242866   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.242872   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.248432   29384 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0827 22:25:20.255400   29384 system_pods.go:59] 24 kube-system pods found
	I0827 22:25:20.255438   29384 system_pods.go:61] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:25:20.255446   29384 system_pods.go:61] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:25:20.255453   29384 system_pods.go:61] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:25:20.255458   29384 system_pods.go:61] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:25:20.255465   29384 system_pods.go:61] "etcd-ha-158602-m03" [03c9965b-f795-4663-aeb5-3814314273ff] Running
	I0827 22:25:20.255470   29384 system_pods.go:61] "kindnet-9wgcl" [e7f9bf39-41d1-4ea2-9778-78aa3e0dd9c2] Running
	I0827 22:25:20.255475   29384 system_pods.go:61] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:25:20.255480   29384 system_pods.go:61] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:25:20.255493   29384 system_pods.go:61] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:25:20.255499   29384 system_pods.go:61] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:25:20.255504   29384 system_pods.go:61] "kube-apiserver-ha-158602-m03" [5b0573ad-9bbc-4ea4-9bbf-f7cd0084a028] Running
	I0827 22:25:20.255509   29384 system_pods.go:61] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:25:20.255514   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:25:20.255518   29384 system_pods.go:61] "kube-controller-manager-ha-158602-m03" [b1bdc020-b729-4576-91f9-7d7055ebabd3] Running
	I0827 22:25:20.255523   29384 system_pods.go:61] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:25:20.255528   29384 system_pods.go:61] "kube-proxy-nhjgk" [f21dff1b-96f0-4ee5-9ad4-524cd4948de1] Running
	I0827 22:25:20.255533   29384 system_pods.go:61] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:25:20.255538   29384 system_pods.go:61] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:25:20.255543   29384 system_pods.go:61] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:25:20.255551   29384 system_pods.go:61] "kube-scheduler-ha-158602-m03" [41ec8f3e-cf73-4447-8e88-1dde3e8d4274] Running
	I0827 22:25:20.255556   29384 system_pods.go:61] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:25:20.255561   29384 system_pods.go:61] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:25:20.255568   29384 system_pods.go:61] "kube-vip-ha-158602-m03" [6fbee1d2-e66b-447a-9f9a-1e477fc0af06] Running
	I0827 22:25:20.255574   29384 system_pods.go:61] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:25:20.255580   29384 system_pods.go:74] duration metric: took 186.113164ms to wait for pod list to return data ...
	I0827 22:25:20.255591   29384 default_sa.go:34] waiting for default service account to be created ...
	I0827 22:25:20.441947   29384 request.go:632] Waited for 186.283914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:25:20.441999   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/default/serviceaccounts
	I0827 22:25:20.442005   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.442013   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.442018   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.446197   29384 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0827 22:25:20.446351   29384 default_sa.go:45] found service account: "default"
	I0827 22:25:20.446369   29384 default_sa.go:55] duration metric: took 190.773407ms for default service account to be created ...
	I0827 22:25:20.446378   29384 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 22:25:20.642703   29384 request.go:632] Waited for 196.239188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.642765   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/namespaces/kube-system/pods
	I0827 22:25:20.642773   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.642783   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.642789   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.649486   29384 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0827 22:25:20.655662   29384 system_pods.go:86] 24 kube-system pods found
	I0827 22:25:20.655689   29384 system_pods.go:89] "coredns-6f6b679f8f-jxzgs" [e0f0b233-f708-42e4-ad45-5a6688b3252e] Running
	I0827 22:25:20.655695   29384 system_pods.go:89] "coredns-6f6b679f8f-x6dcd" [6366bf54-23c5-475c-81a8-a0d9197e7335] Running
	I0827 22:25:20.655699   29384 system_pods.go:89] "etcd-ha-158602" [e008e7f2-bbeb-41ea-9853-324e3906e77f] Running
	I0827 22:25:20.655704   29384 system_pods.go:89] "etcd-ha-158602-m02" [21650a21-fc38-4d58-9ebd-72f1281f29f8] Running
	I0827 22:25:20.655707   29384 system_pods.go:89] "etcd-ha-158602-m03" [03c9965b-f795-4663-aeb5-3814314273ff] Running
	I0827 22:25:20.655710   29384 system_pods.go:89] "kindnet-9wgcl" [e7f9bf39-41d1-4ea2-9778-78aa3e0dd9c2] Running
	I0827 22:25:20.655713   29384 system_pods.go:89] "kindnet-kb84t" [094023b9-ea07-4014-a601-2e2a8b723805] Running
	I0827 22:25:20.655717   29384 system_pods.go:89] "kindnet-zmc6v" [26aceecd-263f-40a6-9fd4-5a537ad78845] Running
	I0827 22:25:20.655721   29384 system_pods.go:89] "kube-apiserver-ha-158602" [a301c7b1-bed4-4f35-b5a1-732b3de2dd5d] Running
	I0827 22:25:20.655726   29384 system_pods.go:89] "kube-apiserver-ha-158602-m02" [f9c48da9-1aba-4645-98e1-5f38a486d56d] Running
	I0827 22:25:20.655731   29384 system_pods.go:89] "kube-apiserver-ha-158602-m03" [5b0573ad-9bbc-4ea4-9bbf-f7cd0084a028] Running
	I0827 22:25:20.655738   29384 system_pods.go:89] "kube-controller-manager-ha-158602" [115ab601-81f5-465e-bb91-aae2d7388dd2] Running
	I0827 22:25:20.655743   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m02" [501fab4f-acec-404d-ac32-7629339cd436] Running
	I0827 22:25:20.655749   29384 system_pods.go:89] "kube-controller-manager-ha-158602-m03" [b1bdc020-b729-4576-91f9-7d7055ebabd3] Running
	I0827 22:25:20.655759   29384 system_pods.go:89] "kube-proxy-5pmrv" [a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00] Running
	I0827 22:25:20.655765   29384 system_pods.go:89] "kube-proxy-nhjgk" [f21dff1b-96f0-4ee5-9ad4-524cd4948de1] Running
	I0827 22:25:20.655774   29384 system_pods.go:89] "kube-proxy-slgmm" [4ad8fb67-440c-46ed-932f-7ef544047e74] Running
	I0827 22:25:20.655779   29384 system_pods.go:89] "kube-scheduler-ha-158602" [f74edf13-ab1c-44ec-87d7-b50a825542c5] Running
	I0827 22:25:20.655782   29384 system_pods.go:89] "kube-scheduler-ha-158602-m02" [7480e703-db16-4698-8963-e4ae89c4e21d] Running
	I0827 22:25:20.655786   29384 system_pods.go:89] "kube-scheduler-ha-158602-m03" [41ec8f3e-cf73-4447-8e88-1dde3e8d4274] Running
	I0827 22:25:20.655790   29384 system_pods.go:89] "kube-vip-ha-158602" [4b2cc362-5e90-4074-a14f-aa3f96f0b5c4] Running
	I0827 22:25:20.655793   29384 system_pods.go:89] "kube-vip-ha-158602-m02" [c05ed3a2-78fc-40ef-bc0d-c1ca2fb414ca] Running
	I0827 22:25:20.655799   29384 system_pods.go:89] "kube-vip-ha-158602-m03" [6fbee1d2-e66b-447a-9f9a-1e477fc0af06] Running
	I0827 22:25:20.655803   29384 system_pods.go:89] "storage-provisioner" [f6442070-e677-44c6-ac72-4b9f8dedc67a] Running
	I0827 22:25:20.655811   29384 system_pods.go:126] duration metric: took 209.428401ms to wait for k8s-apps to be running ...
	I0827 22:25:20.655820   29384 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 22:25:20.655871   29384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:25:20.671528   29384 system_svc.go:56] duration metric: took 15.695486ms WaitForService to wait for kubelet
	I0827 22:25:20.671571   29384 kubeadm.go:582] duration metric: took 22.578302265s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:25:20.671602   29384 node_conditions.go:102] verifying NodePressure condition ...
	I0827 22:25:20.842486   29384 request.go:632] Waited for 170.805433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.77:8443/api/v1/nodes
	I0827 22:25:20.842549   29384 round_trippers.go:463] GET https://192.168.39.77:8443/api/v1/nodes
	I0827 22:25:20.842559   29384 round_trippers.go:469] Request Headers:
	I0827 22:25:20.842570   29384 round_trippers.go:473]     Accept: application/json, */*
	I0827 22:25:20.842580   29384 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0827 22:25:20.846221   29384 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0827 22:25:20.847223   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847251   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847263   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847267   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847271   29384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 22:25:20.847274   29384 node_conditions.go:123] node cpu capacity is 2
	I0827 22:25:20.847278   29384 node_conditions.go:105] duration metric: took 175.670372ms to run NodePressure ...
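The final verification reads each node's reported capacity (the 17734596Ki of ephemeral storage and 2 CPUs printed above) from a single node list. Sketched as an illustrative helper, with the clientset again assumed from the first example:

	// Illustrative helper: list nodes and print the capacity fields the
	// node_conditions.go lines above report.
	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	}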
	I0827 22:25:20.847289   29384 start.go:241] waiting for startup goroutines ...
	I0827 22:25:20.847308   29384 start.go:255] writing updated cluster config ...
	I0827 22:25:20.847633   29384 ssh_runner.go:195] Run: rm -f paused
	I0827 22:25:20.898987   29384 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 22:25:20.901075   29384 out.go:177] * Done! kubectl is now configured to use "ha-158602" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.699712406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797789699687919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c41281c5-3233-4362-bf55-df7d63823b2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.700189007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f77da79-b8dc-437b-80b2-fef639fa2e5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.700239415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f77da79-b8dc-437b-80b2-fef639fa2e5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.700532040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f77da79-b8dc-437b-80b2-fef639fa2e5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.734719876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd8aba2e-d0b0-4352-b751-8764767dc52b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.734791157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd8aba2e-d0b0-4352-b751-8764767dc52b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.736240297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7437255c-ca2e-40ff-a2b9-23e9aa560ad9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.736727730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797789736703617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7437255c-ca2e-40ff-a2b9-23e9aa560ad9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.737210714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=540c76e5-2f8f-4887-8bbd-04bce147abfb name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.737260768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=540c76e5-2f8f-4887-8bbd-04bce147abfb name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.737540460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=540c76e5-2f8f-4887-8bbd-04bce147abfb name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.772849640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e7eeb67-9c72-4a0b-955c-be6240afba08 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.772923201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e7eeb67-9c72-4a0b-955c-be6240afba08 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.775037723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=723dd0cc-9d44-4214-befe-f4caf4f69863 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.776256000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797789776218762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=723dd0cc-9d44-4214-befe-f4caf4f69863 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.776929965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4256680d-c247-458d-92de-adc73a1e4302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.776984139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4256680d-c247-458d-92de-adc73a1e4302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.777221304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4256680d-c247-458d-92de-adc73a1e4302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.817983882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e82e8863-7994-4548-a3f8-75ceef4099ae name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.818056987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e82e8863-7994-4548-a3f8-75ceef4099ae name=/runtime.v1.RuntimeService/Version
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.819708347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06bb18ab-21a6-4035-b17c-68ae432a97cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.820148142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797789820123467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06bb18ab-21a6-4035-b17c-68ae432a97cc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.820664688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a889bbd7-bd9a-4976-8fe1-388985d28628 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.820948184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a889bbd7-bd9a-4976-8fe1-388985d28628 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:29:49 ha-158602 crio[666]: time="2024-08-27 22:29:49.821217050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724797524660197646,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386256295338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d999c4b0e96da5fc99ae584c7282e3595ba8276464b128f30a3a2a1bfdc9764,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724797386169781284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724797386182420777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f7
08-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724797374248087461,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172479737
0715598493,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f,PodSandboxId:c714036efe6860b438a79d2ca173ab448b934564ca89cc65a252fa018f11dece,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172479736169
2848356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 858dc6c96baac7b79ad32a72938d152d,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724797359512571470,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280,PodSandboxId:807fa831db17bc12f0aaa13b5da7c2a2a0eeb00351026ed861290d3614f8c18e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724797359490985850,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d,PodSandboxId:ec7216a9fc947c72e07e3d9b2eac1514e726ca7b47e19dcc86dae8a41f5f3a61,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724797359468712499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724797359456149549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a889bbd7-bd9a-4976-8fe1-388985d28628 name=/runtime.v1.RuntimeService/ListContainers
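	The repeating Version / ImageFsInfo / ListContainers triplets above are CRI-O answering periodic CRI polls. The same endpoints can be exercised by hand from inside the VM; a minimal sketch, assuming the crictl build shipped in the minikube guest exposes these subcommands (illustrative commands, not captured output):
	
	  sudo crictl version        # /runtime.v1.RuntimeService/Version
	  sudo crictl imagefsinfo    # /runtime.v1.ImageService/ImageFsInfo
	  sudo crictl ps -a          # /runtime.v1.RuntimeService/ListContainers with no filter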
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6577993a571ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4f329cad0ee8c       busybox-7dff88458-gxvsc
	70a0959d7fc34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   922e19e19e6b3       coredns-6f6b679f8f-x6dcd
	c1556743f3ed7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7e95e9aaf3336       coredns-6f6b679f8f-jxzgs
	4d999c4b0e96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ffbe4fc48196e       storage-provisioner
	9006fd58dfc63       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   d113f6cede364       kindnet-kb84t
	79ea4c0053fb1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   240775e6cca6c       kube-proxy-5pmrv
	a18851305e21f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c714036efe686       kube-vip-ha-158602
	eb6e08e1cf880       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   71d74ecb9f300       etcd-ha-158602
	961aabfc8401a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   807fa831db17b       kube-controller-manager-ha-158602
	ad2032c0ac674       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   ec7216a9fc947       kube-apiserver-ha-158602
	60feae8b5d1f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   5e03fa37bf662       kube-scheduler-ha-158602
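	The same workload can be cross-checked from the Kubernetes side; a minimal check against the ha-158602 context, assuming kubectl is configured as reported earlier (illustrative, not captured output):
	
	  kubectl get pods -A -o wide    # should list the busybox, coredns, kube-proxy, kindnet and control-plane pods shown above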
	
	
	==> coredns [70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d] <==
	[INFO] 10.244.1.2:58445 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003689264s
	[INFO] 10.244.1.2:40506 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145422s
	[INFO] 10.244.0.4:39982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136663s
	[INFO] 10.244.0.4:43032 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001634431s
	[INFO] 10.244.0.4:57056 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135477s
	[INFO] 10.244.0.4:60425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128187s
	[INFO] 10.244.0.4:33910 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092983s
	[INFO] 10.244.2.2:55029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617414s
	[INFO] 10.244.2.2:43643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085283s
	[INFO] 10.244.2.2:33596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116719s
	[INFO] 10.244.1.2:36406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011994s
	[INFO] 10.244.1.2:45944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072161s
	[INFO] 10.244.0.4:34595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083932s
	[INFO] 10.244.0.4:56369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051489s
	[INFO] 10.244.0.4:45069 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052963s
	[INFO] 10.244.2.2:41980 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118063s
	[INFO] 10.244.1.2:35610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170436s
	[INFO] 10.244.1.2:39033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193301s
	[INFO] 10.244.1.2:58078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123451s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128271s
	[INFO] 10.244.0.4:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010237s
	[INFO] 10.244.0.4:58359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080338s
	[INFO] 10.244.2.2:35482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009539s
	[INFO] 10.244.2.2:45798 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087557s
	[INFO] 10.244.2.2:39340 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090317s
	
	
	==> coredns [c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3] <==
	[INFO] 10.244.1.2:46115 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011856798s
	[INFO] 10.244.0.4:48603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221468s
	[INFO] 10.244.0.4:42021 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000076185s
	[INFO] 10.244.1.2:49292 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012663s
	[INFO] 10.244.1.2:34885 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226601s
	[INFO] 10.244.1.2:54874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014649s
	[INFO] 10.244.1.2:34031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187993s
	[INFO] 10.244.1.2:39560 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019907s
	[INFO] 10.244.0.4:43688 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012926s
	[INFO] 10.244.0.4:51548 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001519492s
	[INFO] 10.244.0.4:58561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052435s
	[INFO] 10.244.2.2:48091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180149s
	[INFO] 10.244.2.2:45077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104198s
	[INFO] 10.244.2.2:41789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215182s
	[INFO] 10.244.2.2:52731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064319s
	[INFO] 10.244.2.2:43957 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126173s
	[INFO] 10.244.1.2:55420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084801s
	[INFO] 10.244.1.2:45306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059642s
	[INFO] 10.244.0.4:46103 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117802s
	[INFO] 10.244.2.2:39675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191879s
	[INFO] 10.244.2.2:43022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100522s
	[INFO] 10.244.2.2:53360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093376s
	[INFO] 10.244.0.4:36426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132899s
	[INFO] 10.244.0.4:42082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000167434s
	[INFO] 10.244.2.2:36926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139785s
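
Note: the lookups in both coredns instances follow the expected pattern — bare names such as kubernetes.default. return NXDOMAIN while the fully qualified kubernetes.default.svc.cluster.local and host.minikube.internal resolve with NOERROR — so in-cluster DNS looks healthy at this point in the run. A minimal, illustrative spot-check (pod name and image are placeholders; the context name assumes the ha-158602 minikube profile):

	# Hypothetical one-off DNS probe; --rm deletes the pod when it exits.
	kubectl --context ha-158602 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local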
	
	
	==> describe nodes <==
	Name:               ha-158602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:22:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:29:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:25:48 +0000   Tue, 27 Aug 2024 22:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-158602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f393f25de7274e45b62eb7b988ece32c
	  System UUID:                f393f25d-e727-4e45-b62e-b7b988ece32c
	  Boot ID:                    a1b3c582-a6fa-4ddf-91a6-fe921f43a40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxvsc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 coredns-6f6b679f8f-jxzgs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 coredns-6f6b679f8f-x6dcd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 etcd-ha-158602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m5s
	  kube-system                 kindnet-kb84t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m
	  kube-system                 kube-apiserver-ha-158602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-controller-manager-ha-158602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-proxy-5pmrv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-scheduler-ha-158602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-vip-ha-158602                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m59s  kube-proxy       
	  Normal  Starting                 7m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m5s   kubelet          Node ha-158602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m5s   kubelet          Node ha-158602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m5s   kubelet          Node ha-158602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m1s   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal  NodeReady                6m45s  kubelet          Node ha-158602 status is now: NodeReady
	  Normal  RegisteredNode           6m3s   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	
	
	Name:               ha-158602-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:23:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:26:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 27 Aug 2024 22:25:41 +0000   Tue, 27 Aug 2024 22:27:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    ha-158602-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b63e2f54de44a9e8ad7eb0ee8626bfb
	  System UUID:                1b63e2f5-4de4-4a9e-8ad7-eb0ee8626bfb
	  Boot ID:                    de317c2d-f8b8-42bc-8e7c-1542b778172c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-crtgh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-ha-158602-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-zmc6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-158602-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-158602-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-slgmm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-158602-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-158602-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x7 over 6m11s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  NodeNotReady             2m37s                  node-controller  Node ha-158602-m02 status is now: NodeNotReady
	
	
	Name:               ha-158602-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_24_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:29:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:25:56 +0000   Tue, 27 Aug 2024 22:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-158602-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d02faebd780a49dd8e6ae91df2852b5e
	  System UUID:                d02faebd-780a-49dd-8e6a-e91df2852b5e
	  Boot ID:                    5fda21c4-296f-4b36-bb5f-5f3dc48345cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmcwr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-ha-158602-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-9wgcl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-158602-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-158602-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-nhjgk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-158602-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-158602-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-158602-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	
	
	Name:               ha-158602-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:25:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:29:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:25:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:26:28 +0000   Tue, 27 Aug 2024 22:26:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-158602-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad10535aaed444b79090a76efa3929c7
	  System UUID:                ad10535a-aed4-44b7-9090-a76efa3929c7
	  Boot ID:                    a9c768c5-396c-462d-ba6b-654fe7bbf53a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6szl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-proxy-658sj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m53s (x2 over 3m53s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x2 over 3m53s)  kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x2 over 3m53s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal  NodeReady                3m32s                  kubelet          Node ha-158602-m04 status is now: NodeReady
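
Note: of the four nodes described above, only ha-158602-m02 carries the node.kubernetes.io/unreachable taints and an Unknown Ready condition ("Kubelet stopped posting node status"); ha-158602, -m03 and -m04 all report Ready. To reproduce this view from the host, something along these lines should work (context and profile name taken from this report):

	# Illustrative; equivalent to the describe output captured above.
	kubectl --context ha-158602 get nodes -o wide
	kubectl --context ha-158602 describe node ha-158602-m02 | grep -A2 Taints
	out/minikube-linux-amd64 -p ha-158602 status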
	
	
	==> dmesg <==
	[Aug27 22:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.699214] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.788922] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.878829] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.217797] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053782] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198923] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125102] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.284457] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.718918] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.171591] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.060183] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.161491] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.086175] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.529529] kauditd_printk_skb: 21 callbacks suppressed
	[Aug27 22:23] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.211142] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff] <==
	{"level":"warn","ts":"2024-08-27T22:29:49.953590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.053307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.076515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.083973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.087634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.098032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.104294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.110031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.113055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.115953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.121221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.128092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.134311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.137767Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.141131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.147773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.153131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.153306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.158619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.168950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.172187Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.175574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.183604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.190221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:29:50.253165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"928ccad376a03472","remote-peer-name":"pipeline","remote-peer-active":false}
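
Note: every warning in this window targets the same remote peer, 928ccad376a03472, with remote-peer-active false, which is consistent with one control-plane member (presumably ha-158602-m02, the node shown NotReady above) being unreachable rather than a fault on this etcd instance. A hedged sketch for checking member health directly — the certificate paths assume minikube's usual /var/lib/minikube/certs layout and may need adjusting:

	# Illustrative; runs etcdctl inside the local etcd pod on the surviving control plane.
	kubectl --context ha-158602 -n kube-system exec etcd-ha-158602 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health --cluster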
	
	
	==> kernel <==
	 22:29:50 up 7 min,  0 users,  load average: 0.18, 0.21, 0.11
	Linux ha-158602 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03] <==
	I0827 22:29:15.272268       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:29:25.272029       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:29:25.272075       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:29:25.272219       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:29:25.272244       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:29:25.272301       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:29:25.272318       1 main.go:299] handling current node
	I0827 22:29:25.272334       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:29:25.272338       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:29:35.269176       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:29:35.269277       1 main.go:299] handling current node
	I0827 22:29:35.269310       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:29:35.269329       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:29:35.269566       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:29:35.269599       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:29:35.269677       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:29:35.269695       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:29:45.264282       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:29:45.264340       1 main.go:299] handling current node
	I0827 22:29:45.264375       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:29:45.264380       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:29:45.264574       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:29:45.264593       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:29:45.264643       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:29:45.264661       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d] <==
	I0827 22:22:44.133834       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0827 22:22:44.141362       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77]
	I0827 22:22:44.142528       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:22:44.152024       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 22:22:44.442387       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:22:45.179875       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:22:45.199550       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0827 22:22:45.352262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:22:49.950652       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0827 22:22:50.044540       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0827 22:25:26.119981       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55998: use of closed network connection
	E0827 22:25:26.309740       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56020: use of closed network connection
	E0827 22:25:26.491976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56030: use of closed network connection
	E0827 22:25:26.683907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56056: use of closed network connection
	E0827 22:25:26.865254       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56076: use of closed network connection
	E0827 22:25:27.040369       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56092: use of closed network connection
	E0827 22:25:27.215029       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56118: use of closed network connection
	E0827 22:25:27.386248       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56142: use of closed network connection
	E0827 22:25:27.554363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56158: use of closed network connection
	E0827 22:25:27.831918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56182: use of closed network connection
	E0827 22:25:28.002684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56192: use of closed network connection
	E0827 22:25:28.182151       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56214: use of closed network connection
	E0827 22:25:28.347892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56228: use of closed network connection
	E0827 22:25:28.521779       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56246: use of closed network connection
	E0827 22:25:28.679982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56270: use of closed network connection
	
	
	==> kube-controller-manager [961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280] <==
	I0827 22:25:57.734085       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-158602-m04" podCIDRs=["10.244.3.0/24"]
	I0827 22:25:57.734219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.734326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.752966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:57.905629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:58.087759       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:58.285593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:59.357632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:25:59.358025       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-158602-m04"
	I0827 22:25:59.478306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:02.256969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:02.304607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:07.754974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:18.322634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:18.322793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-158602-m04"
	I0827 22:26:18.338119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:19.375837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:26:28.373575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:27:13.048965       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-158602-m04"
	I0827 22:27:13.049288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:13.073556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:13.083138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.578896ms"
	I0827 22:27:13.083763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.113µs"
	I0827 22:27:14.451256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:27:18.293809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	
	
	==> kube-proxy [79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:22:51.012541       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:22:51.029420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0827 22:22:51.029562       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:22:51.070953       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:22:51.071047       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:22:51.071093       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:22:51.073377       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:22:51.073729       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:22:51.073783       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:22:51.075157       1 config.go:197] "Starting service config controller"
	I0827 22:22:51.075295       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:22:51.075385       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:22:51.075407       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:22:51.077356       1 config.go:326] "Starting node config controller"
	I0827 22:22:51.077397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:22:51.175473       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:22:51.175537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:22:51.177574       1 shared_informer.go:320] Caches are synced for node config
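
Note: the two "Error cleaning up nftables rules … Operation not supported" blocks at the top of this section (the first is truncated; its opening line fell outside the captured log tail) appear to be kube-proxy removing state left by other proxy modes at startup. Since it then reports "Using iptables Proxier" and its informer caches sync cleanly, this is most likely benign noise on a kernel without nf_tables support. Two illustrative checks of the active mode, assuming the standard kubeadm-managed kube-proxy ConfigMap:

	# The proxier line is already visible above; the ConfigMap shows the configured mode.
	kubectl --context ha-158602 -n kube-system logs kube-proxy-5pmrv | grep -i proxier
	kubectl --context ha-158602 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'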
	
	
	==> kube-scheduler [60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f] <==
	E0827 22:22:43.457140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.515725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.515773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.607309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.607360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.612593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.612665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.690576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0827 22:22:43.690707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.708822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 22:22:43.708922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.744560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 22:22:43.744717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.769502       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 22:22:43.769600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.826851       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:22:43.828024       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:22:46.441310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:25:57.773909       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.774761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-658sj"
	I0827 22:25:57.775154       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.831035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:25:57.831164       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f48452c-8a4b-403b-9da9-90f2dab5ec70(kube-system/kube-proxy-d6zj9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d6zj9"
	E0827 22:25:57.831230       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-d6zj9"
	I0827 22:25:57.831281       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	
	
	==> kubelet <==
	Aug 27 22:28:35 ha-158602 kubelet[1308]: E0827 22:28:35.443889    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797715443357537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.363880    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:28:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:28:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.446063    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797725445700751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:45 ha-158602 kubelet[1308]: E0827 22:28:45.446089    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797725445700751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:55 ha-158602 kubelet[1308]: E0827 22:28:55.448223    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797735447721130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:28:55 ha-158602 kubelet[1308]: E0827 22:28:55.448279    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797735447721130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:05 ha-158602 kubelet[1308]: E0827 22:29:05.450011    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797745449553119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:05 ha-158602 kubelet[1308]: E0827 22:29:05.450362    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797745449553119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:15 ha-158602 kubelet[1308]: E0827 22:29:15.452483    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797755451903030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:15 ha-158602 kubelet[1308]: E0827 22:29:15.453048    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797755451903030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:25 ha-158602 kubelet[1308]: E0827 22:29:25.455260    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797765454876714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:25 ha-158602 kubelet[1308]: E0827 22:29:25.455310    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797765454876714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:35 ha-158602 kubelet[1308]: E0827 22:29:35.456832    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797775456509028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:35 ha-158602 kubelet[1308]: E0827 22:29:35.457138    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797775456509028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:45 ha-158602 kubelet[1308]: E0827 22:29:45.361781    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:29:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:29:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:29:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:29:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:29:45 ha-158602 kubelet[1308]: E0827 22:29:45.458956    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797785458572009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:29:45 ha-158602 kubelet[1308]: E0827 22:29:45.459037    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724797785458572009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-158602 -n ha-158602
helpers_test.go:261: (dbg) Run:  kubectl --context ha-158602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (47.96s)
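
Two patterns in the post-mortem logs above are worth separating from the actual failure: the kube-scheduler "forbidden" list/watch warnings are emitted while RBAC informers are still syncing and appear to stop after the "Caches are synced" line, and the recurring kubelet "Could not set up iptables canary" error points at the guest kernel not exposing an ip6tables nat table. A minimal manual check, offered only as a sketch, assuming the ha-158602 profile is still running with its kubeconfig context available (the ip6table_nat module name is an assumption about the guest kernel build):

	# Can the scheduler identity list namespaces now that RBAC caches have synced?
	kubectl --context ha-158602 auth can-i list namespaces --as=system:kube-scheduler

	# Does the guest expose an ip6tables nat table? This may simply report that the
	# ip6table_nat module is unavailable in the minikube guest kernel, which would
	# explain the repeated canary errors without affecting the test outcome.
	out/minikube-linux-amd64 ssh -p ha-158602 "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"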

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (348.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-158602 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-158602 -v=7 --alsologtostderr
E0827 22:31:21.248375   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:31:48.950616   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-158602 -v=7 --alsologtostderr: exit status 82 (2m1.757767663s)

                                                
                                                
-- stdout --
	* Stopping node "ha-158602-m04"  ...
	* Stopping node "ha-158602-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:29:51.626382   35098 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:29:51.626633   35098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:51.626643   35098 out.go:358] Setting ErrFile to fd 2...
	I0827 22:29:51.626647   35098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:29:51.626804   35098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:29:51.627010   35098 out.go:352] Setting JSON to false
	I0827 22:29:51.627122   35098 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:51.627491   35098 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:51.627576   35098 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:29:51.627741   35098 mustload.go:65] Loading cluster: ha-158602
	I0827 22:29:51.627878   35098 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:29:51.627901   35098 stop.go:39] StopHost: ha-158602-m04
	I0827 22:29:51.628281   35098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:51.628350   35098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:51.643741   35098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0827 22:29:51.644153   35098 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:51.644659   35098 main.go:141] libmachine: Using API Version  1
	I0827 22:29:51.644686   35098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:51.645039   35098 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:51.647560   35098 out.go:177] * Stopping node "ha-158602-m04"  ...
	I0827 22:29:51.648834   35098 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 22:29:51.648870   35098 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:29:51.649080   35098 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 22:29:51.649104   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:29:51.652168   35098 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:51.652601   35098 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:25:43 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:29:51.652636   35098 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:29:51.652874   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:29:51.653027   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:29:51.653198   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:29:51.653352   35098 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:29:51.734655   35098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0827 22:29:51.786143   35098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0827 22:29:51.838380   35098 main.go:141] libmachine: Stopping "ha-158602-m04"...
	I0827 22:29:51.838403   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:51.840079   35098 main.go:141] libmachine: (ha-158602-m04) Calling .Stop
	I0827 22:29:51.843405   35098 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 0/120
	I0827 22:29:52.915493   35098 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:29:52.916796   35098 main.go:141] libmachine: Machine "ha-158602-m04" was stopped.
	I0827 22:29:52.916815   35098 stop.go:75] duration metric: took 1.267983931s to stop
	I0827 22:29:52.916844   35098 stop.go:39] StopHost: ha-158602-m03
	I0827 22:29:52.917152   35098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:29:52.917232   35098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:29:52.932414   35098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0827 22:29:52.932853   35098 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:29:52.933281   35098 main.go:141] libmachine: Using API Version  1
	I0827 22:29:52.933323   35098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:29:52.933684   35098 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:29:52.935984   35098 out.go:177] * Stopping node "ha-158602-m03"  ...
	I0827 22:29:52.938013   35098 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 22:29:52.938039   35098 main.go:141] libmachine: (ha-158602-m03) Calling .DriverName
	I0827 22:29:52.938267   35098 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 22:29:52.938288   35098 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHHostname
	I0827 22:29:52.940994   35098 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:52.941508   35098 main.go:141] libmachine: (ha-158602-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:4d:2e", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:24:19 +0000 UTC Type:0 Mac:52:54:00:5e:4d:2e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:ha-158602-m03 Clientid:01:52:54:00:5e:4d:2e}
	I0827 22:29:52.941546   35098 main.go:141] libmachine: (ha-158602-m03) DBG | domain ha-158602-m03 has defined IP address 192.168.39.91 and MAC address 52:54:00:5e:4d:2e in network mk-ha-158602
	I0827 22:29:52.941620   35098 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHPort
	I0827 22:29:52.941794   35098 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHKeyPath
	I0827 22:29:52.941941   35098 main.go:141] libmachine: (ha-158602-m03) Calling .GetSSHUsername
	I0827 22:29:52.942071   35098 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m03/id_rsa Username:docker}
	I0827 22:29:53.023243   35098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0827 22:29:53.076012   35098 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0827 22:29:53.128884   35098 main.go:141] libmachine: Stopping "ha-158602-m03"...
	I0827 22:29:53.128910   35098 main.go:141] libmachine: (ha-158602-m03) Calling .GetState
	I0827 22:29:53.130447   35098 main.go:141] libmachine: (ha-158602-m03) Calling .Stop
	I0827 22:29:53.133866   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 0/120
	I0827 22:29:54.135532   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 1/120
	I0827 22:29:55.137391   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 2/120
	I0827 22:29:56.138978   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 3/120
	I0827 22:29:57.140262   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 4/120
	I0827 22:29:58.142120   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 5/120
	I0827 22:29:59.143643   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 6/120
	I0827 22:30:00.145592   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 7/120
	I0827 22:30:01.147181   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 8/120
	I0827 22:30:02.148756   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 9/120
	I0827 22:30:03.150463   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 10/120
	I0827 22:30:04.151964   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 11/120
	I0827 22:30:05.153324   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 12/120
	I0827 22:30:06.155086   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 13/120
	I0827 22:30:07.156555   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 14/120
	I0827 22:30:08.158533   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 15/120
	I0827 22:30:09.160252   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 16/120
	I0827 22:30:10.161765   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 17/120
	I0827 22:30:11.163307   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 18/120
	I0827 22:30:12.165139   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 19/120
	I0827 22:30:13.166978   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 20/120
	I0827 22:30:14.168643   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 21/120
	I0827 22:30:15.170056   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 22/120
	I0827 22:30:16.171906   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 23/120
	I0827 22:30:17.173428   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 24/120
	I0827 22:30:18.175278   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 25/120
	I0827 22:30:19.177194   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 26/120
	I0827 22:30:20.178795   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 27/120
	I0827 22:30:21.181463   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 28/120
	I0827 22:30:22.182858   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 29/120
	I0827 22:30:23.185055   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 30/120
	I0827 22:30:24.186718   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 31/120
	I0827 22:30:25.188330   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 32/120
	I0827 22:30:26.189944   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 33/120
	I0827 22:30:27.191145   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 34/120
	I0827 22:30:28.193350   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 35/120
	I0827 22:30:29.194966   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 36/120
	I0827 22:30:30.196611   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 37/120
	I0827 22:30:31.197975   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 38/120
	I0827 22:30:32.199698   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 39/120
	I0827 22:30:33.201820   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 40/120
	I0827 22:30:34.203439   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 41/120
	I0827 22:30:35.204864   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 42/120
	I0827 22:30:36.206252   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 43/120
	I0827 22:30:37.207923   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 44/120
	I0827 22:30:38.210183   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 45/120
	I0827 22:30:39.211744   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 46/120
	I0827 22:30:40.213163   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 47/120
	I0827 22:30:41.214767   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 48/120
	I0827 22:30:42.216987   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 49/120
	I0827 22:30:43.219204   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 50/120
	I0827 22:30:44.220841   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 51/120
	I0827 22:30:45.222311   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 52/120
	I0827 22:30:46.223969   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 53/120
	I0827 22:30:47.225656   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 54/120
	I0827 22:30:48.228119   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 55/120
	I0827 22:30:49.229619   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 56/120
	I0827 22:30:50.231147   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 57/120
	I0827 22:30:51.232664   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 58/120
	I0827 22:30:52.234446   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 59/120
	I0827 22:30:53.236274   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 60/120
	I0827 22:30:54.238303   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 61/120
	I0827 22:30:55.240513   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 62/120
	I0827 22:30:56.242207   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 63/120
	I0827 22:30:57.243785   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 64/120
	I0827 22:30:58.245591   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 65/120
	I0827 22:30:59.247387   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 66/120
	I0827 22:31:00.248950   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 67/120
	I0827 22:31:01.250953   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 68/120
	I0827 22:31:02.252199   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 69/120
	I0827 22:31:03.253747   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 70/120
	I0827 22:31:04.255189   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 71/120
	I0827 22:31:05.256999   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 72/120
	I0827 22:31:06.258789   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 73/120
	I0827 22:31:07.260424   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 74/120
	I0827 22:31:08.261759   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 75/120
	I0827 22:31:09.263093   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 76/120
	I0827 22:31:10.264633   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 77/120
	I0827 22:31:11.266244   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 78/120
	I0827 22:31:12.267521   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 79/120
	I0827 22:31:13.269146   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 80/120
	I0827 22:31:14.271253   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 81/120
	I0827 22:31:15.272779   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 82/120
	I0827 22:31:16.274924   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 83/120
	I0827 22:31:17.276178   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 84/120
	I0827 22:31:18.278603   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 85/120
	I0827 22:31:19.280134   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 86/120
	I0827 22:31:20.281821   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 87/120
	I0827 22:31:21.284090   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 88/120
	I0827 22:31:22.285417   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 89/120
	I0827 22:31:23.287450   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 90/120
	I0827 22:31:24.288747   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 91/120
	I0827 22:31:25.290218   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 92/120
	I0827 22:31:26.291516   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 93/120
	I0827 22:31:27.293086   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 94/120
	I0827 22:31:28.294471   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 95/120
	I0827 22:31:29.296091   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 96/120
	I0827 22:31:30.297425   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 97/120
	I0827 22:31:31.298945   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 98/120
	I0827 22:31:32.300256   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 99/120
	I0827 22:31:33.302184   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 100/120
	I0827 22:31:34.303595   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 101/120
	I0827 22:31:35.305712   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 102/120
	I0827 22:31:36.307024   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 103/120
	I0827 22:31:37.308424   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 104/120
	I0827 22:31:38.310537   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 105/120
	I0827 22:31:39.311912   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 106/120
	I0827 22:31:40.313219   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 107/120
	I0827 22:31:41.314558   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 108/120
	I0827 22:31:42.315834   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 109/120
	I0827 22:31:43.317767   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 110/120
	I0827 22:31:44.318900   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 111/120
	I0827 22:31:45.320206   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 112/120
	I0827 22:31:46.321597   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 113/120
	I0827 22:31:47.323106   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 114/120
	I0827 22:31:48.325037   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 115/120
	I0827 22:31:49.326377   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 116/120
	I0827 22:31:50.327949   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 117/120
	I0827 22:31:51.329319   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 118/120
	I0827 22:31:52.331432   35098 main.go:141] libmachine: (ha-158602-m03) Waiting for machine to stop 119/120
	I0827 22:31:53.332496   35098 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0827 22:31:53.332582   35098 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0827 22:31:53.334533   35098 out.go:201] 
	W0827 22:31:53.336113   35098 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0827 22:31:53.336130   35098 out.go:270] * 
	* 
	W0827 22:31:53.338278   35098 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 22:31:53.339438   35098 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-158602 -v=7 --alsologtostderr" : exit status 82
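
Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the stop command polled ha-158602-m03 120 times over roughly two minutes ("Waiting for machine to stop 0/120" through "119/120") and the domain never left the "Running" state. A hedged way to inspect or unblock this from the Jenkins host, assuming the kvm2 driver registered the libvirt domain under the node name and that the user can talk to qemu:///system (sudo may be required), would be:

	# Show all libvirt domains and their current states.
	virsh -c qemu:///system list --all

	# If a graceful shutdown hangs, force the domain off; this is the libvirt
	# equivalent of pulling the power and loses any unsynced guest state.
	virsh -c qemu:///system destroy ha-158602-m03
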
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-158602 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-158602 --wait=true -v=7 --alsologtostderr: (3m44.691578466s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-158602
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-158602 -n ha-158602
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-158602 logs -n 25: (1.714740852s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m04 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp testdata/cp-test.txt                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m04_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03:/home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m03 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-158602 node stop m02 -v=7                                                     | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-158602 node start m02 -v=7                                                    | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-158602 -v=7                                                           | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-158602 -v=7                                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-158602 --wait=true -v=7                                                    | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:31 UTC | 27 Aug 24 22:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-158602                                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:35 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:31:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:31:53.389908   35579 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:31:53.390055   35579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:31:53.390066   35579 out.go:358] Setting ErrFile to fd 2...
	I0827 22:31:53.390072   35579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:31:53.390273   35579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:31:53.390921   35579 out.go:352] Setting JSON to false
	I0827 22:31:53.391881   35579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4460,"bootTime":1724793453,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:31:53.391940   35579 start.go:139] virtualization: kvm guest
	I0827 22:31:53.394330   35579 out.go:177] * [ha-158602] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:31:53.395832   35579 notify.go:220] Checking for updates...
	I0827 22:31:53.395860   35579 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:31:53.397279   35579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:31:53.398639   35579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:31:53.399899   35579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:31:53.401011   35579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:31:53.402200   35579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:31:53.403798   35579 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:31:53.403915   35579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:31:53.404557   35579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:31:53.404608   35579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:31:53.421650   35579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0827 22:31:53.422096   35579 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:31:53.422626   35579 main.go:141] libmachine: Using API Version  1
	I0827 22:31:53.422648   35579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:31:53.422990   35579 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:31:53.423179   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.459636   35579 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 22:31:53.461104   35579 start.go:297] selected driver: kvm2
	I0827 22:31:53.461119   35579 start.go:901] validating driver "kvm2" against &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:31:53.461277   35579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:31:53.461600   35579 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:31:53.461696   35579 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:31:53.476675   35579 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:31:53.477305   35579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:31:53.477369   35579 cni.go:84] Creating CNI manager for ""
	I0827 22:31:53.477381   35579 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0827 22:31:53.477434   35579 start.go:340] cluster config:
	{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:31:53.477587   35579 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:31:53.479644   35579 out.go:177] * Starting "ha-158602" primary control-plane node in "ha-158602" cluster
	I0827 22:31:53.480858   35579 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:31:53.480895   35579 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:31:53.480902   35579 cache.go:56] Caching tarball of preloaded images
	I0827 22:31:53.481032   35579 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:31:53.481056   35579 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:31:53.481230   35579 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:31:53.481431   35579 start.go:360] acquireMachinesLock for ha-158602: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:31:53.481473   35579 start.go:364] duration metric: took 23.535µs to acquireMachinesLock for "ha-158602"
	I0827 22:31:53.481485   35579 start.go:96] Skipping create...Using existing machine configuration
	I0827 22:31:53.481493   35579 fix.go:54] fixHost starting: 
	I0827 22:31:53.481797   35579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:31:53.481826   35579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:31:53.495902   35579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0827 22:31:53.496344   35579 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:31:53.496841   35579 main.go:141] libmachine: Using API Version  1
	I0827 22:31:53.496861   35579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:31:53.497140   35579 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:31:53.497318   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.497453   35579 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:31:53.499023   35579 fix.go:112] recreateIfNeeded on ha-158602: state=Running err=<nil>
	W0827 22:31:53.499062   35579 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 22:31:53.500934   35579 out.go:177] * Updating the running kvm2 "ha-158602" VM ...
	I0827 22:31:53.502345   35579 machine.go:93] provisionDockerMachine start ...
	I0827 22:31:53.502366   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.502587   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.505249   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.505724   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.505750   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.505863   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.506019   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.506167   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.506311   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.506493   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.506694   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.506706   35579 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 22:31:53.625309   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:31:53.625341   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.625620   35579 buildroot.go:166] provisioning hostname "ha-158602"
	I0827 22:31:53.625648   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.625844   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.628387   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.628808   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.628850   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.628940   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.629152   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.629317   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.629482   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.629618   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.629822   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.629842   35579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602 && echo "ha-158602" | sudo tee /etc/hostname
	I0827 22:31:53.760240   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:31:53.760271   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.763095   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.763637   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.763659   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.763911   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.764096   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.764259   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.764414   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.764563   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.764780   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.764810   35579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:31:53.876725   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:31:53.876764   35579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:31:53.876813   35579 buildroot.go:174] setting up certificates
	I0827 22:31:53.876826   35579 provision.go:84] configureAuth start
	I0827 22:31:53.876845   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.877109   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:31:53.879667   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.880063   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.880094   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.880260   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.882437   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.882836   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.882860   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.883031   35579 provision.go:143] copyHostCerts
	I0827 22:31:53.883061   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:31:53.883115   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:31:53.883130   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:31:53.883196   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:31:53.883325   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:31:53.883344   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:31:53.883348   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:31:53.883381   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:31:53.883437   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:31:53.883457   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:31:53.883463   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:31:53.883483   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:31:53.883545   35579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602 san=[127.0.0.1 192.168.39.77 ha-158602 localhost minikube]
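For reference, the SAN list requested here (loopback, the node IP, the hostname, and the generic names) can be read back out of the generated server certificate with openssl; the path below is the machine-store location used throughout this log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'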
	I0827 22:31:54.045794   35579 provision.go:177] copyRemoteCerts
	I0827 22:31:54.045857   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:31:54.045881   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:54.048533   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.048899   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:54.048924   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.049093   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:54.049316   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.049501   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:54.049651   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:31:54.136040   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:31:54.136114   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:31:54.161677   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:31:54.161749   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0827 22:31:54.185953   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:31:54.186022   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 22:31:54.209655   35579 provision.go:87] duration metric: took 332.81472ms to configureAuth
	I0827 22:31:54.209677   35579 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:31:54.209907   35579 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:31:54.209982   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:54.212839   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.213254   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:54.213278   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.213502   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:54.213727   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.213913   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.214058   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:54.214313   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:54.214614   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:54.214636   35579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:33:24.984074   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:33:24.984133   35579 machine.go:96] duration metric: took 1m31.481746518s to provisionDockerMachine
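Almost all of the 1m31s reported for provisionDockerMachine is the `systemctl restart crio` issued at 22:31:54 and not acknowledged until 22:33:24. When a restart that slow needs a closer look, the guest journal is the usual starting point; a sketch using the profile name from this log:

	minikube ssh -p ha-158602 -- sudo journalctl -u crio -b --no-pager | tail -n 50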
	I0827 22:33:24.984148   35579 start.go:293] postStartSetup for "ha-158602" (driver="kvm2")
	I0827 22:33:24.984163   35579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:33:24.984193   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:24.984605   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:33:24.984638   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:24.988420   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:24.988972   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:24.989005   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:24.989180   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:24.989428   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:24.989561   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:24.989683   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.080256   35579 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:33:25.084326   35579 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:33:25.084355   35579 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:33:25.084434   35579 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:33:25.084571   35579 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:33:25.084584   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:33:25.084708   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:33:25.094625   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:33:25.119132   35579 start.go:296] duration metric: took 134.967081ms for postStartSetup
	I0827 22:33:25.119184   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.119511   35579 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0827 22:33:25.119576   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.122295   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.122866   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.122894   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.123167   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.123385   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.123585   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.123713   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	W0827 22:33:25.211411   35579 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0827 22:33:25.211450   35579 fix.go:56] duration metric: took 1m31.729956586s for fixHost
	I0827 22:33:25.211473   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.214397   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.214900   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.214924   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.215197   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.215416   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.215609   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.215799   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.215980   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:33:25.216169   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:33:25.216180   35579 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:33:25.329116   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724798005.285062183
	
	I0827 22:33:25.329135   35579 fix.go:216] guest clock: 1724798005.285062183
	I0827 22:33:25.329142   35579 fix.go:229] Guest: 2024-08-27 22:33:25.285062183 +0000 UTC Remote: 2024-08-27 22:33:25.211458625 +0000 UTC m=+91.861749274 (delta=73.603558ms)
	I0827 22:33:25.329174   35579 fix.go:200] guest clock delta is within tolerance: 73.603558ms
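The clock check above runs `date +%s.%N` on the guest and compares it with the host's wall clock, accepting the ~74 ms delta as within tolerance. A rough way to repeat the same comparison by hand (profile name taken from the log):

	host_ts=$(date +%s.%N)
	guest_ts=$(minikube ssh -p ha-158602 -- date +%s.%N)
	awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "guest - host = %+.6f s\n", g - h }'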
	I0827 22:33:25.329180   35579 start.go:83] releasing machines lock for "ha-158602", held for 1m31.847700276s
	I0827 22:33:25.329200   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.329450   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:33:25.331808   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.332198   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.332217   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.332340   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.332860   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.333072   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.333174   35579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:33:25.333213   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.333308   35579 ssh_runner.go:195] Run: cat /version.json
	I0827 22:33:25.333332   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.335898   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336300   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.336328   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336347   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336478   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.336670   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.336840   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.336965   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.336993   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.337040   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.337152   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.337309   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.337471   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.337593   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.451717   35579 ssh_runner.go:195] Run: systemctl --version
	I0827 22:33:25.458610   35579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:33:25.622966   35579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:33:25.636565   35579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:33:25.636647   35579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:33:25.671232   35579 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 22:33:25.671262   35579 start.go:495] detecting cgroup driver to use...
	I0827 22:33:25.671345   35579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:33:25.703626   35579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:33:25.755043   35579 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:33:25.755116   35579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:33:25.806388   35579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:33:25.853623   35579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:33:26.020151   35579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:33:26.166536   35579 docker.go:233] disabling docker service ...
	I0827 22:33:26.166626   35579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:33:26.183592   35579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:33:26.198106   35579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:33:26.353712   35579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:33:26.501483   35579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:33:26.515568   35579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:33:26.534616   35579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:33:26.534671   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.545829   35579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:33:26.545903   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.556655   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.567287   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.577825   35579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:33:26.588591   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.599139   35579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.610726   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.624394   35579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:33:26.635657   35579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:33:26.646280   35579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:33:26.808632   35579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:33:27.947699   35579 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.139021259s)
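The sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf pointing at the 3.10 pause image, using cgroupfs with a pod-scoped conmon cgroup, and allowing unprivileged low ports; a quick way to confirm from inside the VM (sketch):

	# Keys touched by the sed commands above; expected values:
	#   pause_image     = "registry.k8s.io/pause:3.10"
	#   cgroup_manager  = "cgroupfs"
	#   conmon_cgroup   = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf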
	I0827 22:33:27.947734   35579 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:33:27.947791   35579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:33:27.952941   35579 start.go:563] Will wait 60s for crictl version
	I0827 22:33:27.953036   35579 ssh_runner.go:195] Run: which crictl
	I0827 22:33:27.956938   35579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:33:27.996625   35579 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:33:27.996703   35579 ssh_runner.go:195] Run: crio --version
	I0827 22:33:28.026540   35579 ssh_runner.go:195] Run: crio --version
	I0827 22:33:28.058429   35579 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:33:28.060198   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:33:28.063145   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:28.063530   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:28.063559   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:28.063766   35579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:33:28.069069   35579 kubeadm.go:883] updating cluster {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:33:28.069435   35579 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:33:28.069553   35579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:33:28.114106   35579 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:33:28.114128   35579 crio.go:433] Images already preloaded, skipping extraction
	I0827 22:33:28.114187   35579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:33:28.147557   35579 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:33:28.147578   35579 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:33:28.147588   35579 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.0 crio true true} ...
	I0827 22:33:28.147692   35579 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 22:33:28.147750   35579 ssh_runner.go:195] Run: crio config
	I0827 22:33:28.194138   35579 cni.go:84] Creating CNI manager for ""
	I0827 22:33:28.194156   35579 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0827 22:33:28.194168   35579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:33:28.194187   35579 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-158602 NodeName:ha-158602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:33:28.194316   35579 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-158602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 22:33:28.194335   35579 kube-vip.go:115] generating kube-vip config ...
	I0827 22:33:28.194377   35579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:33:28.205503   35579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:33:28.205640   35579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
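Once this static pod is running, the APIServerHAVIP from the manifest (192.168.39.254) should be announced on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease, backed by the ip_vs modules loaded by the modprobe step above. A quick check from inside a control-plane VM (sketch):

	ip -4 addr show eth0 | grep 192.168.39.254   # present only on the current leader
	lsmod | grep '^ip_vs'                        # IPVS modules from the modprobe above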
	I0827 22:33:28.205700   35579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:33:28.215990   35579 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:33:28.216060   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0827 22:33:28.225708   35579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0827 22:33:28.242548   35579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:33:28.259662   35579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0827 22:33:28.276918   35579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0827 22:33:28.294236   35579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:33:28.299207   35579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:33:28.443447   35579 ssh_runner.go:195] Run: sudo systemctl start kubelet
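If the kubelet restart above needs verifying from the host, checking the unit state over SSH is usually enough (profile name from this log):

	minikube ssh -p ha-158602 -- sudo systemctl is-active kubelet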
	I0827 22:33:28.457919   35579 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.77
	I0827 22:33:28.457943   35579 certs.go:194] generating shared ca certs ...
	I0827 22:33:28.457965   35579 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.458130   35579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:33:28.458193   35579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:33:28.458208   35579 certs.go:256] generating profile certs ...
	I0827 22:33:28.458301   35579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:33:28.458341   35579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0
	I0827 22:33:28.458366   35579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.91 192.168.39.254]
	I0827 22:33:28.752229   35579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 ...
	I0827 22:33:28.752262   35579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0: {Name:mkb9a41cd484507a2d5b50d3d0ae9a5258be4714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.752483   35579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0 ...
	I0827 22:33:28.752505   35579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0: {Name:mk8b483437cb4eaaa3018654e91bba6c4e419fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.752606   35579 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:33:28.752806   35579 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:33:28.752971   35579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:33:28.752988   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:33:28.753006   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:33:28.753027   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:33:28.753049   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:33:28.753068   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:33:28.753087   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:33:28.753107   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:33:28.753127   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:33:28.753196   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:33:28.753237   35579 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:33:28.753250   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:33:28.753283   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:33:28.753318   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:33:28.753352   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:33:28.753427   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:33:28.753473   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:33:28.753494   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:33:28.753507   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:28.754186   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:33:28.851668   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:33:29.079218   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:33:29.212882   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:33:29.326332   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0827 22:33:29.479144   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 22:33:29.725790   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:33:29.860168   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:33:29.914171   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:33:29.961924   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:33:29.995385   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:33:30.036866   35579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:33:30.062379   35579 ssh_runner.go:195] Run: openssl version
	I0827 22:33:30.090404   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:33:30.115019   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.126097   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.126181   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.139487   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:33:30.159855   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:33:30.179089   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.185656   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.185724   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.191400   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:33:30.202883   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:33:30.217105   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.221979   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.222031   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.229820   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
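The 51391683.0 / b5213941.0 / 3ec20f2e.0 names created above follow openssl's subject-hash convention: the link name is the certificate's subject hash plus a .0 suffix, which is how the system trust store locates CA certificates. Recomputing one of the links by hand (sketch, run inside the VM):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${hash}.0"   # should resolve back to minikubeCA.pem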
	I0827 22:33:30.242132   35579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:33:30.248850   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 22:33:30.256036   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 22:33:30.264575   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 22:33:30.272653   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 22:33:30.279388   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 22:33:30.285460   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
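	Each `-checkend 86400` run above asks whether a certificate expires within the next 24 hours (openssl exits non-zero if it does). A rough Go equivalent using crypto/x509, with a placeholder path, looks like this:
	
	// Rough equivalent of `openssl x509 -noout -in <crt> -checkend 86400`:
	// report whether a PEM certificate expires within the next 24 hours.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
	
		deadline := time.Now().Add(86400 * time.Second)
		if cert.NotAfter.Before(deadline) {
			fmt.Println("Certificate will expire") // what openssl prints before exiting 1
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
	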
	I0827 22:33:30.291392   35579 kubeadm.go:392] StartCluster: {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
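	The single long StartCluster line above is a Go struct rendered with the %+v verb, which is why every field name appears even when its value is empty or zero. A toy sketch (the Node type below is illustrative, not minikube's actual config type) reproduces the "{Name:m02 IP:192.168.39.142 ... ControlPlane:true Worker:true}" shape:
	
	// Toy reproduction of the node-list formatting seen in the StartCluster dump.
	package main
	
	import "fmt"
	
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}
	
	func main() {
		nodes := []Node{
			{Name: "", IP: "192.168.39.77", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
			{Name: "m02", IP: "192.168.39.142", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
		}
		// %+v prints field names alongside values.
		fmt.Printf("Nodes:%+v\n", nodes)
	}
	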
	I0827 22:33:30.291527   35579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:33:30.291605   35579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:33:30.352539   35579 cri.go:89] found id: "fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c"
	I0827 22:33:30.352563   35579 cri.go:89] found id: "80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8"
	I0827 22:33:30.352567   35579 cri.go:89] found id: "d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468"
	I0827 22:33:30.352570   35579 cri.go:89] found id: "cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae"
	I0827 22:33:30.352573   35579 cri.go:89] found id: "88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b"
	I0827 22:33:30.352576   35579 cri.go:89] found id: "5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0"
	I0827 22:33:30.352579   35579 cri.go:89] found id: "9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e"
	I0827 22:33:30.352585   35579 cri.go:89] found id: "6d81ed0028836c65f03d647548e3e5428c3a7c3ea78c602e8859da81460f5be7"
	I0827 22:33:30.352588   35579 cri.go:89] found id: "bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25"
	I0827 22:33:30.352593   35579 cri.go:89] found id: "7d7040ed93da7173a21ab0833477864db295fa399704456dbcf15e700138abf0"
	I0827 22:33:30.352595   35579 cri.go:89] found id: "70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d"
	I0827 22:33:30.352598   35579 cri.go:89] found id: "c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3"
	I0827 22:33:30.352609   35579 cri.go:89] found id: "9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03"
	I0827 22:33:30.352612   35579 cri.go:89] found id: "79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730"
	I0827 22:33:30.352616   35579 cri.go:89] found id: "a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f"
	I0827 22:33:30.352619   35579 cri.go:89] found id: "eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff"
	I0827 22:33:30.352621   35579 cri.go:89] found id: "961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280"
	I0827 22:33:30.352625   35579 cri.go:89] found id: "ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d"
	I0827 22:33:30.352628   35579 cri.go:89] found id: "60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f"
	I0827 22:33:30.352630   35579 cri.go:89] found id: ""
	I0827 22:33:30.352680   35579 ssh_runner.go:195] Run: sudo runc list -f json
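	The cri.go listing above comes from running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` in the guest and collecting one container ID per line. A short Go sketch of that step (assuming crictl is on PATH and can reach the CRI-O socket; not minikube's exact implementation):
	
	// Sketch: list all kube-system container IDs via crictl and echo them,
	// roughly what the "found id:" lines above record.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
	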
	
	
	==> CRI-O <==
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.742161491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798138742124839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=830048ce-69ba-4a1f-b794-2d9eb210930a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.742903643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd47c6b1-072f-4f99-aa03-69931c4eb2ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.742993408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd47c6b1-072f-4f99-aa03-69931c4eb2ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.743610087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd47c6b1-072f-4f99-aa03-69931c4eb2ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.785695383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df5083f5-931f-47ac-ad83-0c3741e2888d name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.785774961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df5083f5-931f-47ac-ad83-0c3741e2888d name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.787168723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c27be4e9-dd39-4a55-ba17-0e9e5da5460e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.787716803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798138787682766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c27be4e9-dd39-4a55-ba17-0e9e5da5460e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.788235309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df980676-9112-492b-a260-d201497831b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.788298487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df980676-9112-492b-a260-d201497831b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.788739694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df980676-9112-492b-a260-d201497831b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.829393390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a472cad3-bf44-4b6d-be3e-536df875c104 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.829598248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a472cad3-bf44-4b6d-be3e-536df875c104 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.831117408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8afa154-c74c-45cb-967f-7504637a63e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.831754687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798138831726998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8afa154-c74c-45cb-967f-7504637a63e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.832547858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3356ca1-5738-4ea4-a534-526df2046939 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.832652530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3356ca1-5738-4ea4-a534-526df2046939 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.833236989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3356ca1-5738-4ea4-a534-526df2046939 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.879234562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=323f4f51-8bf3-456d-a042-0e21b9e50e76 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.879307854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=323f4f51-8bf3-456d-a042-0e21b9e50e76 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.881327811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d454e4f6-50cc-458f-9b66-8433f9acb831 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.881909586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798138881873188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d454e4f6-50cc-458f-9b66-8433f9acb831 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.885820597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f95a98a6-4c24-4ac5-bf32-07731047f284 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.885895360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f95a98a6-4c24-4ac5-bf32-07731047f284 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:35:38 ha-158602 crio[3814]: time="2024-08-27 22:35:38.886297141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f95a98a6-4c24-4ac5-bf32-07731047f284 name=/runtime.v1.RuntimeService/ListContainers
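The /runtime.v1.RuntimeService/ListContainers traffic recorded above is the standard CRI RuntimeService API that kubelet and crictl use against CRI-O. As a rough, hypothetical sketch (the socket path, timeout, and output formatting below are assumptions for illustration, not values taken from this run), the same RPC can be issued from Go with the generated cri-api client:

// Hypothetical sketch: issue the same /runtime.v1.RuntimeService/ListContainers
// call that the crio debug log above records. Socket path and timeout are
// assumptions (CRI-O's usual default), not values taken from this report.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" message above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		name, attempt := "", uint32(0)
		if c.Metadata != nil {
			name, attempt = c.Metadata.Name, c.Metadata.Attempt
		}
		// Print a truncated ID plus name, attempt and state, similar to the
		// container status table that follows in this report.
		fmt.Printf("%-13.13s  %-28s  attempt=%d  %s\n", c.Id, name, attempt, c.State)
	}
}

Running crictl ps -a against the same socket issues this RPC and renders the tabular view shown in the "container status" section below.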
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	601495ed11e17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   12c1f8c9dbd6e       storage-provisioner
	eb20f3f202c26       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   f13a0a1ea9db9       busybox-7dff88458-gxvsc
	fb475e1179e65       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   89df003e83b42       kube-controller-manager-ha-158602
	a5735ac34d1ff       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   59aa70c5f3481       kube-apiserver-ha-158602
	9caa2a6fe79af       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   d90d7ab1f41bd       kube-vip-ha-158602
	f4bc6a8b4d535       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   e5bfc7f83bb70       kube-proxy-5pmrv
	fe2fe55492557       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   65041c74f2cb3       coredns-6f6b679f8f-jxzgs
	80d6fdca5fb24       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   a285158c423b0       kindnet-kb84t
	d6671aef22454       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   8b6b964e2cbb6       coredns-6f6b679f8f-x6dcd
	cc8a19b5a2e06       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   9d5793d869740       kube-scheduler-ha-158602
	88d8ca73b340f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   89df003e83b42       kube-controller-manager-ha-158602
	5d29b152972a1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   19e1d6606b8c7       etcd-ha-158602
	9de12fe017aa2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   59aa70c5f3481       kube-apiserver-ha-158602
	bb94f6a77a8a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago        Exited              storage-provisioner       4                   ffbe4fc48196e       storage-provisioner
	6577993a571ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4f329cad0ee8c       busybox-7dff88458-gxvsc
	70a0959d7fc34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   922e19e19e6b3       coredns-6f6b679f8f-x6dcd
	c1556743f3ed7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   7e95e9aaf3336       coredns-6f6b679f8f-jxzgs
	9006fd58dfc63       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    12 minutes ago       Exited              kindnet-cni               0                   d113f6cede364       kindnet-kb84t
	79ea4c0053fb1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      12 minutes ago       Exited              kube-proxy                0                   240775e6cca6c       kube-proxy-5pmrv
	eb6e08e1cf880       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   71d74ecb9f300       etcd-ha-158602
	60feae8b5d1f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      12 minutes ago       Exited              kube-scheduler            0                   5e03fa37bf662       kube-scheduler-ha-158602
	
	
	==> coredns [70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d] <==
	[INFO] 10.244.0.4:43032 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001634431s
	[INFO] 10.244.0.4:57056 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135477s
	[INFO] 10.244.0.4:60425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128187s
	[INFO] 10.244.0.4:33910 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092983s
	[INFO] 10.244.2.2:55029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617414s
	[INFO] 10.244.2.2:43643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085283s
	[INFO] 10.244.2.2:33596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116719s
	[INFO] 10.244.1.2:36406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011994s
	[INFO] 10.244.1.2:45944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072161s
	[INFO] 10.244.0.4:34595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083932s
	[INFO] 10.244.0.4:56369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051489s
	[INFO] 10.244.0.4:45069 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052963s
	[INFO] 10.244.2.2:41980 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118063s
	[INFO] 10.244.1.2:35610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170436s
	[INFO] 10.244.1.2:39033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193301s
	[INFO] 10.244.1.2:58078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123451s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128271s
	[INFO] 10.244.0.4:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010237s
	[INFO] 10.244.0.4:58359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080338s
	[INFO] 10.244.2.2:35482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009539s
	[INFO] 10.244.2.2:45798 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087557s
	[INFO] 10.244.2.2:39340 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090317s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1849&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
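The reflector errors in this and the following coredns logs come from the kubernetes plugin's client-go list/watch loop; the service VIP 10.96.0.1:443 was unreachable at the time, consistent with the kube-apiserver restarts shown in the container status above. A minimal in-cluster sketch of the kind of Service list that loop performs (hypothetical illustration only, not code from this test) is:

// Hypothetical sketch of an in-cluster Service list like the one the client-go
// reflector retries; when the kubernetes service VIP is unreachable this fails
// with errors such as "dial tcp 10.96.0.1:443: connect: no route to host".
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points the client at the kubernetes service VIP
	// (KUBERNETES_SERVICE_HOST/PORT), i.e. 10.96.0.1:443 in this cluster.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}
	svcs, err := clientset.CoreV1().Services(metav1.NamespaceAll).List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		// This is the failure mode surfaced in the coredns log above.
		log.Fatalf("list services: %v", err)
	}
	fmt.Printf("listed %d services\n", len(svcs.Items))
}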
	
	
	==> coredns [c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3] <==
	[INFO] 10.244.1.2:34885 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226601s
	[INFO] 10.244.1.2:54874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014649s
	[INFO] 10.244.1.2:34031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187993s
	[INFO] 10.244.1.2:39560 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019907s
	[INFO] 10.244.0.4:43688 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012926s
	[INFO] 10.244.0.4:51548 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001519492s
	[INFO] 10.244.0.4:58561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052435s
	[INFO] 10.244.2.2:48091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180149s
	[INFO] 10.244.2.2:45077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104198s
	[INFO] 10.244.2.2:41789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215182s
	[INFO] 10.244.2.2:52731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064319s
	[INFO] 10.244.2.2:43957 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126173s
	[INFO] 10.244.1.2:55420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084801s
	[INFO] 10.244.1.2:45306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059642s
	[INFO] 10.244.0.4:46103 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117802s
	[INFO] 10.244.2.2:39675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191879s
	[INFO] 10.244.2.2:43022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100522s
	[INFO] 10.244.2.2:53360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093376s
	[INFO] 10.244.0.4:36426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132899s
	[INFO] 10.244.0.4:42082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000167434s
	[INFO] 10.244.2.2:36926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139785s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1791&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1785&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-158602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:22:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:35:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-158602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f393f25de7274e45b62eb7b988ece32c
	  System UUID:                f393f25d-e727-4e45-b62e-b7b988ece32c
	  Boot ID:                    a1b3c582-a6fa-4ddf-91a6-fe921f43a40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxvsc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-jxzgs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-6f6b679f8f-x6dcd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-158602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-kb84t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-158602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-158602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5pmrv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-158602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-158602                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 86s                   kube-proxy       
	  Normal   Starting                 12m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                   kubelet          Node ha-158602 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                   kubelet          Node ha-158602 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                   kubelet          Node ha-158602 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   NodeReady                12m                   kubelet          Node ha-158602 status is now: NodeReady
	  Normal   RegisteredNode           11m                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           10m                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Warning  ContainerGCFailed        2m54s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m18s (x3 over 3m7s)  kubelet          Node ha-158602 status is now: NodeNotReady
	  Normal   RegisteredNode           90s                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           86s                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           32s                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	
	
	Name:               ha-158602-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:23:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:35:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    ha-158602-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b63e2f54de44a9e8ad7eb0ee8626bfb
	  System UUID:                1b63e2f5-4de4-4a9e-8ad7-eb0ee8626bfb
	  Boot ID:                    28954d1d-3c7c-4000-b674-990248834daf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-crtgh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-158602-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-zmc6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-158602-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-158602-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-slgmm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-158602-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-158602-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  NodeNotReady             8m26s                node-controller  Node ha-158602-m02 status is now: NodeNotReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           86s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           32s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	
	
	Name:               ha-158602-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_24_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:35:19 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:35:19 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:35:19 +0000   Tue, 27 Aug 2024 22:24:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:35:19 +0000   Tue, 27 Aug 2024 22:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.91
	  Hostname:    ha-158602-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d02faebd780a49dd8e6ae91df2852b5e
	  System UUID:                d02faebd-780a-49dd-8e6a-e91df2852b5e
	  Boot ID:                    c14b1b5f-b8b8-4089-be05-dd56deff031e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hmcwr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-158602-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-9wgcl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-158602-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-158602-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-nhjgk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-158602-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-158602-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 32s                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-158602-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-158602-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal   RegisteredNode           90s                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal   RegisteredNode           86s                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	  Normal   Starting                 51s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  51s                kubelet          Node ha-158602-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s                kubelet          Node ha-158602-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s                kubelet          Node ha-158602-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 51s                kubelet          Node ha-158602-m03 has been rebooted, boot id: c14b1b5f-b8b8-4089-be05-dd56deff031e
	  Normal   RegisteredNode           32s                node-controller  Node ha-158602-m03 event: Registered Node ha-158602-m03 in Controller
	
	
	Name:               ha-158602-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:25:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:35:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:35:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:35:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-158602-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad10535aaed444b79090a76efa3929c7
	  System UUID:                ad10535a-aed4-44b7-9090-a76efa3929c7
	  Boot ID:                    7e6d87ac-956c-465c-9c5a-34c53f3cdbb7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c6szl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m42s
	  kube-system                 kube-proxy-658sj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m42s (x2 over 9m42s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m42s (x2 over 9m42s)  kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m42s (x2 over 9m42s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m41s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           9m40s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   NodeReady                9m21s                  kubelet          Node ha-158602-m04 status is now: NodeReady
	  Normal   RegisteredNode           90s                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           86s                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   NodeNotReady             50s                    node-controller  Node ha-158602-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                     kubelet          Node ha-158602-m04 has been rebooted, boot id: 7e6d87ac-956c-465c-9c5a-34c53f3cdbb7
	  Normal   NodeReady                8s                     kubelet          Node ha-158602-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.054656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053782] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198923] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125102] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.284457] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.718918] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.171591] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.060183] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.161491] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.086175] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.529529] kauditd_printk_skb: 21 callbacks suppressed
	[Aug27 22:23] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.211142] kauditd_printk_skb: 26 callbacks suppressed
	[Aug27 22:30] kauditd_printk_skb: 1 callbacks suppressed
	[Aug27 22:33] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.147661] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[  +0.157117] systemd-fstab-generator[3753]: Ignoring "noauto" option for root device
	[  +0.174500] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.148291] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.313486] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +1.634470] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[  +2.520646] kauditd_printk_skb: 227 callbacks suppressed
	[ +23.556037] kauditd_printk_skb: 5 callbacks suppressed
	[Aug27 22:34] kauditd_printk_skb: 2 callbacks suppressed
	[Aug27 22:35] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0] <==
	{"level":"warn","ts":"2024-08-27T22:34:42.668912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:34:42.684089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"226361457cf4c252","from":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-27T22:34:44.466172Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.91:2380/version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:44.466284Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:45.037600Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:45.040857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:48.468423Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.91:2380/version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:48.468577Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:50.038585Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:50.041775Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:52.469893Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.91:2380/version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:52.469954Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:55.039292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:55.042604Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:56.471555Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.91:2380/version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:34:56.471612Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94fcd24071fd3def","error":"Get \"https://192.168.39.91:2380/version\": dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-27T22:34:58.607647Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:34:58.607710Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:34:58.631801Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"226361457cf4c252","to":"94fcd24071fd3def","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-27T22:34:58.631853Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:34:58.632092Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"226361457cf4c252","to":"94fcd24071fd3def","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-27T22:34:58.632126Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:34:58.659935Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:00.040360Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:35:00.043751Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	
	
	==> etcd [eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff] <==
	{"level":"warn","ts":"2024-08-27T22:31:54.561474Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"7.811127ms","error":"dial tcp 192.168.39.91:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-27T22:31:54.561509Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"875.28µs","error":"dial tcp 192.168.39.91:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-27T22:31:54.626265Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:31:54.626318Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T22:31:54.626395Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"226361457cf4c252","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-27T22:31:54.626614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626634Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626655Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626751Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626815Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626884Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626928Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626953Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627002Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627072Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627169Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627222Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627282Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627326Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.630889Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"warn","ts":"2024-08-27T22:31:54.630923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.134611451s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-27T22:31:54.631041Z","caller":"traceutil/trace.go:171","msg":"trace[1544599587] range","detail":"{range_begin:; range_end:; }","duration":"9.13474934s","start":"2024-08-27T22:31:45.496280Z","end":"2024-08-27T22:31:54.631030Z","steps":["trace[1544599587] 'agreement among raft nodes before linearized reading'  (duration: 9.134608012s)"],"step_count":1}
	{"level":"error","ts":"2024-08-27T22:31:54.631108Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-27T22:31:54.631154Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-08-27T22:31:54.631322Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-158602","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"]}
	
	
	==> kernel <==
	 22:35:39 up 13 min,  0 users,  load average: 0.53, 0.63, 0.35
	Linux ha-158602 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8] <==
	I0827 22:35:00.792165       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:35:10.790250       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:35:10.790500       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:35:10.790697       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:35:10.790726       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:35:10.790796       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:35:10.790816       1 main.go:299] handling current node
	I0827 22:35:10.790841       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:35:10.790857       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:35:20.790873       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:35:20.791102       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:35:20.791561       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:35:20.791638       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:35:20.791745       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:35:20.791803       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:35:20.791925       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:35:20.791962       1 main.go:299] handling current node
	I0827 22:35:30.789616       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:35:30.789759       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:35:30.789931       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:35:30.789958       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:35:30.790035       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:35:30.790054       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:35:30.790132       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:35:30.790152       1 main.go:299] handling current node
	
	
	==> kindnet [9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03] <==
	I0827 22:31:15.263572       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:25.271183       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:25.271236       1 main.go:299] handling current node
	I0827 22:31:25.271260       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:25.271266       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:25.271392       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:25.271409       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:25.271530       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:25.271551       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:35.262560       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:35.262707       1 main.go:299] handling current node
	I0827 22:31:35.262736       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:35.262756       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:35.262940       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:35.262968       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:35.263040       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:35.263061       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:45.270624       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:45.270698       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:45.270914       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:45.270938       1 main.go:299] handling current node
	I0827 22:31:45.270970       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:45.270976       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:45.271058       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:45.271079       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e] <==
	I0827 22:33:29.571884       1 options.go:228] external host was not specified, using 192.168.39.77
	I0827 22:33:29.574178       1 server.go:142] Version: v1.31.0
	I0827 22:33:29.574237       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0827 22:33:29.575120       1 run.go:72] "command failed" err="tls: private key does not match public key"
	
	
	==> kube-apiserver [a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461] <==
	I0827 22:34:05.720189       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 22:34:05.720831       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 22:34:05.809006       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 22:34:05.809032       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 22:34:05.809143       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 22:34:05.809599       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 22:34:05.809612       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0827 22:34:05.809896       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 22:34:05.810349       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 22:34:05.810488       1 aggregator.go:171] initial CRD sync complete...
	I0827 22:34:05.810526       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 22:34:05.810549       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 22:34:05.810571       1 cache.go:39] Caches are synced for autoregister controller
	I0827 22:34:05.810727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 22:34:05.815210       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 22:34:05.822058       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0827 22:34:05.824998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.91]
	I0827 22:34:05.827259       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 22:34:05.827386       1 policy_source.go:224] refreshing policies
	I0827 22:34:05.827855       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:34:05.839640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0827 22:34:05.847735       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0827 22:34:05.908115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 22:34:06.720577       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0827 22:34:07.165422       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77 192.168.39.91]
	
	
	==> kube-controller-manager [88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b] <==
	I0827 22:33:30.326127       1 serving.go:386] Generated self-signed cert in-memory
	I0827 22:33:30.740538       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0827 22:33:30.740628       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:33:30.742489       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 22:33:30.742680       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 22:33:30.743151       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0827 22:33:30.743240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0827 22:33:40.746423       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.77:8443/healthz\": dial tcp 192.168.39.77:8443: connect: connection refused"
	
	
	==> kube-controller-manager [fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43] <==
	I0827 22:34:24.925297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="92.979µs"
	I0827 22:34:26.008050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="18.003826ms"
	I0827 22:34:26.008408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="162.117µs"
	I0827 22:34:26.047118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="25.430982ms"
	I0827 22:34:26.048984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="174.197µs"
	I0827 22:34:26.121981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.625313ms"
	I0827 22:34:26.122171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="151.063µs"
	I0827 22:34:31.239246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:34:48.771805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m03"
	I0827 22:34:49.454922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:34:49.473216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:34:49.598830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.411449ms"
	I0827 22:34:49.598949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.301µs"
	I0827 22:34:53.458233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:34:54.599752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:35:01.775334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m02"
	I0827 22:35:04.845364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.422859ms"
	I0827 22:35:04.845604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="179.646µs"
	I0827 22:35:07.060100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:35:07.157122       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:35:19.290236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m03"
	I0827 22:35:31.485379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-158602-m04"
	I0827 22:35:31.486341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:35:31.502158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:35:32.077862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	
	
	==> kube-proxy [79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730] <==
	E0827 22:30:43.356287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:43.356676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:43.357108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:43.356822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:43.357209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:50.523883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0827 22:30:50.524979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:50.525102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:50.524763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:50.525161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0827 22:30:50.525043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.812648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.812910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.813189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.813303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.813887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.813951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:24.316914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:24.317749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:27.389576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:27.390372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:27.390666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:27.390741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:51.964727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:51.964897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:33:33.340920       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:36.412679       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:39.485096       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:45.628024       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:54.844710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0827 22:34:13.293020       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0827 22:34:13.293128       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:34:13.327281       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:34:13.327327       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:34:13.327356       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:34:13.329746       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:34:13.330126       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:34:13.330173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:34:13.331640       1 config.go:197] "Starting service config controller"
	I0827 22:34:13.331705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:34:13.331747       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:34:13.331777       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:34:13.341506       1 config.go:326] "Starting node config controller"
	I0827 22:34:13.341518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:34:13.432521       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:34:13.432533       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:34:13.441589       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f] <==
	E0827 22:22:43.769600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.826851       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:22:43.828024       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:22:46.441310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:25:57.773909       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.774761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-658sj"
	I0827 22:25:57.775154       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.831035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:25:57.831164       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f48452c-8a4b-403b-9da9-90f2dab5ec70(kube-system/kube-proxy-d6zj9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d6zj9"
	E0827 22:25:57.831230       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-d6zj9"
	I0827 22:25:57.831281       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:31:45.963326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:47.585165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0827 22:31:47.598816       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0827 22:31:49.310966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0827 22:31:49.394751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0827 22:31:49.884082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:50.052341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:50.176477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0827 22:31:50.298800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0827 22:31:50.357582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0827 22:31:50.682040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0827 22:31:51.093515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0827 22:31:53.076340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0827 22:31:54.328325       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae] <==
	W0827 22:33:48.694058       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:48.694130       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:48.942216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.77:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:48.942267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.77:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:49.267259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:49.267389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:49.398410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:49.398613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:50.288430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.77:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:50.288584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:50.385924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:50.385988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:51.689168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:51.689288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:34:05.745876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 22:34:05.745929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 22:34:05.746146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:34:05.746245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:34:05.746340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.751966       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:34:05.752010       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:34:54.584690       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 22:34:25 ha-158602 kubelet[1308]: E0827 22:34:25.510615    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798065510281454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:25 ha-158602 kubelet[1308]: E0827 22:34:25.510655    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798065510281454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:26 ha-158602 kubelet[1308]: I0827 22:34:26.564574    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-gxvsc" podStartSLOduration=543.220959953 podStartE2EDuration="9m5.564547281s" podCreationTimestamp="2024-08-27 22:25:21 +0000 UTC" firstStartedPulling="2024-08-27 22:25:22.303078506 +0000 UTC m=+157.148578041" lastFinishedPulling="2024-08-27 22:25:24.646665829 +0000 UTC m=+159.492165369" observedRunningTime="2024-08-27 22:25:24.996005685 +0000 UTC m=+159.841505242" watchObservedRunningTime="2024-08-27 22:34:26.564547281 +0000 UTC m=+701.410046835"
	Aug 27 22:34:35 ha-158602 kubelet[1308]: E0827 22:34:35.513403    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798075512936339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:35 ha-158602 kubelet[1308]: E0827 22:34:35.513860    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798075512936339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:45 ha-158602 kubelet[1308]: E0827 22:34:45.362977    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:34:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:34:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:34:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:34:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:34:45 ha-158602 kubelet[1308]: E0827 22:34:45.516187    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798085515822060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:45 ha-158602 kubelet[1308]: E0827 22:34:45.516220    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798085515822060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:55 ha-158602 kubelet[1308]: E0827 22:34:55.518358    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798095517821155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:34:55 ha-158602 kubelet[1308]: E0827 22:34:55.518808    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798095517821155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:05 ha-158602 kubelet[1308]: E0827 22:35:05.521562    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798105521102714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:05 ha-158602 kubelet[1308]: E0827 22:35:05.521600    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798105521102714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:07 ha-158602 kubelet[1308]: I0827 22:35:07.333797    1308 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-158602" podUID="4b2cc362-5e90-4074-a14f-aa3f96f0b5c4"
	Aug 27 22:35:07 ha-158602 kubelet[1308]: I0827 22:35:07.361382    1308 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-158602"
	Aug 27 22:35:08 ha-158602 kubelet[1308]: I0827 22:35:08.102520    1308 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-158602" podUID="4b2cc362-5e90-4074-a14f-aa3f96f0b5c4"
	Aug 27 22:35:15 ha-158602 kubelet[1308]: E0827 22:35:15.524596    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798115523299907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:15 ha-158602 kubelet[1308]: E0827 22:35:15.524640    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798115523299907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:25 ha-158602 kubelet[1308]: E0827 22:35:25.527090    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798125526314448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:25 ha-158602 kubelet[1308]: E0827 22:35:25.527622    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798125526314448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:35 ha-158602 kubelet[1308]: E0827 22:35:35.529895    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798135529283073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:35:35 ha-158602 kubelet[1308]: E0827 22:35:35.529978    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798135529283073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 22:35:38.454130   36932 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-158602 -n ha-158602
helpers_test.go:261: (dbg) Run:  kubectl --context ha-158602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (348.94s)
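Note on the `bufio.Scanner: token too long` error in the stderr block above: Go's bufio.Scanner rejects any single line longer than its token limit (64 KiB by default), which is what logs.go hits when reading lastStart.txt. The sketch below is a minimal, hypothetical illustration of that failure mode and the usual workaround (raising the scanner's buffer via Scanner.Buffer); it is not minikube's actual logs.go code, and the file name is only an example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical log file whose very long lines overflow the default token limit.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Default max token size is bufio.MaxScanTokenSize (64 KiB). Raising the limit
	// (here to 1 MiB) avoids "bufio.Scanner: token too long" on oversized lines.
	scanner.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 1024*1024)

	for scanner.Scan() {
		_ = scanner.Text() // process each line
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}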

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 stop -v=7 --alsologtostderr
E0827 22:36:21.248727   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 stop -v=7 --alsologtostderr: exit status 82 (2m0.461471014s)

                                                
                                                
-- stdout --
	* Stopping node "ha-158602-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:35:57.586817   37341 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:35:57.586939   37341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:35:57.586949   37341 out.go:358] Setting ErrFile to fd 2...
	I0827 22:35:57.586956   37341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:35:57.587155   37341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:35:57.587404   37341 out.go:352] Setting JSON to false
	I0827 22:35:57.587492   37341 mustload.go:65] Loading cluster: ha-158602
	I0827 22:35:57.587825   37341 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:35:57.587921   37341 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:35:57.588103   37341 mustload.go:65] Loading cluster: ha-158602
	I0827 22:35:57.588248   37341 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:35:57.588286   37341 stop.go:39] StopHost: ha-158602-m04
	I0827 22:35:57.588759   37341 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:35:57.588808   37341 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:35:57.603179   37341 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0827 22:35:57.603681   37341 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:35:57.604275   37341 main.go:141] libmachine: Using API Version  1
	I0827 22:35:57.604301   37341 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:35:57.604818   37341 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:35:57.607422   37341 out.go:177] * Stopping node "ha-158602-m04"  ...
	I0827 22:35:57.609205   37341 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0827 22:35:57.609242   37341 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:35:57.609477   37341 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0827 22:35:57.609500   37341 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:35:57.612407   37341 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:35:57.612935   37341 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:35:26 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:35:57.612969   37341 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:35:57.613155   37341 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:35:57.613339   37341 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:35:57.613501   37341 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:35:57.613665   37341 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	I0827 22:35:57.698712   37341 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0827 22:35:57.751384   37341 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0827 22:35:57.802891   37341 main.go:141] libmachine: Stopping "ha-158602-m04"...
	I0827 22:35:57.802941   37341 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:35:57.804562   37341 main.go:141] libmachine: (ha-158602-m04) Calling .Stop
	I0827 22:35:57.808214   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 0/120
	I0827 22:35:58.809956   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 1/120
	I0827 22:35:59.811249   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 2/120
	I0827 22:36:00.812616   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 3/120
	I0827 22:36:01.814103   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 4/120
	I0827 22:36:02.815954   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 5/120
	I0827 22:36:03.817281   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 6/120
	I0827 22:36:04.818775   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 7/120
	I0827 22:36:05.819935   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 8/120
	I0827 22:36:06.821407   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 9/120
	I0827 22:36:07.823032   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 10/120
	I0827 22:36:08.824588   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 11/120
	I0827 22:36:09.825863   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 12/120
	I0827 22:36:10.827311   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 13/120
	I0827 22:36:11.828739   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 14/120
	I0827 22:36:12.830668   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 15/120
	I0827 22:36:13.832055   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 16/120
	I0827 22:36:14.833432   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 17/120
	I0827 22:36:15.834863   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 18/120
	I0827 22:36:16.836227   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 19/120
	I0827 22:36:17.837882   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 20/120
	I0827 22:36:18.839391   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 21/120
	I0827 22:36:19.840813   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 22/120
	I0827 22:36:20.842855   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 23/120
	I0827 22:36:21.844215   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 24/120
	I0827 22:36:22.846577   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 25/120
	I0827 22:36:23.848144   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 26/120
	I0827 22:36:24.849685   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 27/120
	I0827 22:36:25.851847   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 28/120
	I0827 22:36:26.853064   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 29/120
	I0827 22:36:27.855274   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 30/120
	I0827 22:36:28.856624   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 31/120
	I0827 22:36:29.858902   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 32/120
	I0827 22:36:30.860200   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 33/120
	I0827 22:36:31.861396   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 34/120
	I0827 22:36:32.863325   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 35/120
	I0827 22:36:33.865061   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 36/120
	I0827 22:36:34.866809   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 37/120
	I0827 22:36:35.868054   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 38/120
	I0827 22:36:36.869341   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 39/120
	I0827 22:36:37.871359   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 40/120
	I0827 22:36:38.872606   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 41/120
	I0827 22:36:39.874797   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 42/120
	I0827 22:36:40.876146   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 43/120
	I0827 22:36:41.877587   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 44/120
	I0827 22:36:42.879431   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 45/120
	I0827 22:36:43.880908   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 46/120
	I0827 22:36:44.882216   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 47/120
	I0827 22:36:45.883531   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 48/120
	I0827 22:36:46.884867   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 49/120
	I0827 22:36:47.886651   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 50/120
	I0827 22:36:48.888210   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 51/120
	I0827 22:36:49.889520   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 52/120
	I0827 22:36:50.890911   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 53/120
	I0827 22:36:51.892104   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 54/120
	I0827 22:36:52.893893   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 55/120
	I0827 22:36:53.896127   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 56/120
	I0827 22:36:54.897285   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 57/120
	I0827 22:36:55.899062   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 58/120
	I0827 22:36:56.900543   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 59/120
	I0827 22:36:57.902512   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 60/120
	I0827 22:36:58.903918   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 61/120
	I0827 22:36:59.905515   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 62/120
	I0827 22:37:00.906857   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 63/120
	I0827 22:37:01.908034   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 64/120
	I0827 22:37:02.909763   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 65/120
	I0827 22:37:03.911745   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 66/120
	I0827 22:37:04.913434   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 67/120
	I0827 22:37:05.914975   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 68/120
	I0827 22:37:06.916365   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 69/120
	I0827 22:37:07.917846   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 70/120
	I0827 22:37:08.919428   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 71/120
	I0827 22:37:09.921182   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 72/120
	I0827 22:37:10.922638   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 73/120
	I0827 22:37:11.924010   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 74/120
	I0827 22:37:12.926098   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 75/120
	I0827 22:37:13.927641   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 76/120
	I0827 22:37:14.929710   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 77/120
	I0827 22:37:15.931223   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 78/120
	I0827 22:37:16.932529   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 79/120
	I0827 22:37:17.934734   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 80/120
	I0827 22:37:18.936013   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 81/120
	I0827 22:37:19.937450   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 82/120
	I0827 22:37:20.938922   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 83/120
	I0827 22:37:21.940069   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 84/120
	I0827 22:37:22.941911   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 85/120
	I0827 22:37:23.944003   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 86/120
	I0827 22:37:24.945339   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 87/120
	I0827 22:37:25.946986   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 88/120
	I0827 22:37:26.948296   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 89/120
	I0827 22:37:27.950317   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 90/120
	I0827 22:37:28.951734   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 91/120
	I0827 22:37:29.953119   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 92/120
	I0827 22:37:30.954465   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 93/120
	I0827 22:37:31.955851   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 94/120
	I0827 22:37:32.957614   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 95/120
	I0827 22:37:33.959100   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 96/120
	I0827 22:37:34.960528   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 97/120
	I0827 22:37:35.962357   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 98/120
	I0827 22:37:36.964099   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 99/120
	I0827 22:37:37.966246   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 100/120
	I0827 22:37:38.967517   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 101/120
	I0827 22:37:39.968806   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 102/120
	I0827 22:37:40.970929   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 103/120
	I0827 22:37:41.972374   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 104/120
	I0827 22:37:42.974241   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 105/120
	I0827 22:37:43.975618   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 106/120
	I0827 22:37:44.977000   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 107/120
	I0827 22:37:45.979084   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 108/120
	I0827 22:37:46.980538   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 109/120
	I0827 22:37:47.982710   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 110/120
	I0827 22:37:48.984103   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 111/120
	I0827 22:37:49.985624   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 112/120
	I0827 22:37:50.987007   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 113/120
	I0827 22:37:51.989005   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 114/120
	I0827 22:37:52.990906   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 115/120
	I0827 22:37:53.992293   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 116/120
	I0827 22:37:54.993650   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 117/120
	I0827 22:37:55.994949   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 118/120
	I0827 22:37:56.996623   37341 main.go:141] libmachine: (ha-158602-m04) Waiting for machine to stop 119/120
	I0827 22:37:57.997427   37341 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0827 22:37:57.997492   37341 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0827 22:37:57.999473   37341 out.go:201] 
	W0827 22:37:58.000964   37341 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0827 22:37:58.000998   37341 out.go:270] * 
	W0827 22:37:58.003590   37341 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 22:37:58.005255   37341 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-158602 stop -v=7 --alsologtostderr": exit status 82
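The stop loop above polls 120 times at one-second intervals, gives up while the VM is still "Running", and exits with status 82 (GUEST_STOP_TIMEOUT). A minimal triage sketch, not taken from this report: it re-runs the failing stop command and then queries libvirt directly for the stuck domain. The virsh invocations are an assumption about the CI host (the kvm2 driver uses the qemu:///system URI shown in the profile config); the profile and domain names come from the log above.

	# Re-run the stop that timed out (same args as the ha_test.go:533 assertion)
	out/minikube-linux-amd64 -p ha-158602 stop -v=7 --alsologtostderr
	# Ask libvirt what state the worker domain is really in
	virsh --connect qemu:///system domstate ha-158602-m04
	# Try a graceful ACPI shutdown first, then a hard power-off if it stays running
	virsh --connect qemu:///system shutdown ha-158602-m04
	virsh --connect qemu:///system destroy ha-158602-m04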
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr: exit status 3 (19.053509451s)

                                                
                                                
-- stdout --
	ha-158602
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-158602-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:37:58.054643   37778 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:37:58.054915   37778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:37:58.054925   37778 out.go:358] Setting ErrFile to fd 2...
	I0827 22:37:58.054929   37778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:37:58.055141   37778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:37:58.055462   37778 out.go:352] Setting JSON to false
	I0827 22:37:58.055501   37778 mustload.go:65] Loading cluster: ha-158602
	I0827 22:37:58.055543   37778 notify.go:220] Checking for updates...
	I0827 22:37:58.055927   37778 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:37:58.055943   37778 status.go:255] checking status of ha-158602 ...
	I0827 22:37:58.056371   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.056444   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.079755   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0827 22:37:58.080308   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.080926   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.080962   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.081347   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.081597   37778 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:37:58.083171   37778 status.go:330] ha-158602 host status = "Running" (err=<nil>)
	I0827 22:37:58.083190   37778 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:37:58.083593   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.083637   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.099300   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39119
	I0827 22:37:58.099758   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.100229   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.100250   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.100613   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.100798   37778 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:37:58.103636   37778 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:37:58.104090   37778 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:37:58.104117   37778 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:37:58.104290   37778 host.go:66] Checking if "ha-158602" exists ...
	I0827 22:37:58.104655   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.104693   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.121325   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I0827 22:37:58.121767   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.122336   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.122374   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.122684   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.122869   37778 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:37:58.123048   37778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:37:58.123077   37778 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:37:58.125990   37778 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:37:58.126405   37778 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:37:58.126437   37778 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:37:58.126511   37778 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:37:58.126692   37778 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:37:58.126858   37778 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:37:58.126980   37778 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:37:58.213474   37778 ssh_runner.go:195] Run: systemctl --version
	I0827 22:37:58.220199   37778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:37:58.236145   37778 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:37:58.236176   37778 api_server.go:166] Checking apiserver status ...
	I0827 22:37:58.236209   37778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:37:58.255551   37778 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5012/cgroup
	W0827 22:37:58.264727   37778 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5012/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:37:58.264801   37778 ssh_runner.go:195] Run: ls
	I0827 22:37:58.269404   37778 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:37:58.273779   37778 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:37:58.273798   37778 status.go:422] ha-158602 apiserver status = Running (err=<nil>)
	I0827 22:37:58.273807   37778 status.go:257] ha-158602 status: &{Name:ha-158602 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:37:58.273822   37778 status.go:255] checking status of ha-158602-m02 ...
	I0827 22:37:58.274102   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.274131   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.288825   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42375
	I0827 22:37:58.289309   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.289814   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.289842   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.290171   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.290376   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetState
	I0827 22:37:58.291868   37778 status.go:330] ha-158602-m02 host status = "Running" (err=<nil>)
	I0827 22:37:58.291886   37778 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:37:58.292183   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.292237   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.307130   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I0827 22:37:58.307496   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.308012   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.308032   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.308345   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.308563   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetIP
	I0827 22:37:58.311078   37778 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:37:58.311543   37778 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:33:40 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:37:58.311570   37778 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:37:58.311683   37778 host.go:66] Checking if "ha-158602-m02" exists ...
	I0827 22:37:58.311964   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.311996   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.326781   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0827 22:37:58.327142   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.327671   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.327692   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.328001   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.328190   37778 main.go:141] libmachine: (ha-158602-m02) Calling .DriverName
	I0827 22:37:58.328332   37778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:37:58.328348   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHHostname
	I0827 22:37:58.331264   37778 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:37:58.331705   37778 main.go:141] libmachine: (ha-158602-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7e:06", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:33:40 +0000 UTC Type:0 Mac:52:54:00:fa:7e:06 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-158602-m02 Clientid:01:52:54:00:fa:7e:06}
	I0827 22:37:58.331733   37778 main.go:141] libmachine: (ha-158602-m02) DBG | domain ha-158602-m02 has defined IP address 192.168.39.142 and MAC address 52:54:00:fa:7e:06 in network mk-ha-158602
	I0827 22:37:58.331888   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHPort
	I0827 22:37:58.332085   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHKeyPath
	I0827 22:37:58.332212   37778 main.go:141] libmachine: (ha-158602-m02) Calling .GetSSHUsername
	I0827 22:37:58.332340   37778 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m02/id_rsa Username:docker}
	I0827 22:37:58.414372   37778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:37:58.432609   37778 kubeconfig.go:125] found "ha-158602" server: "https://192.168.39.254:8443"
	I0827 22:37:58.432639   37778 api_server.go:166] Checking apiserver status ...
	I0827 22:37:58.432684   37778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:37:58.447486   37778 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	W0827 22:37:58.458823   37778 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:37:58.458871   37778 ssh_runner.go:195] Run: ls
	I0827 22:37:58.463502   37778 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0827 22:37:58.468088   37778 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0827 22:37:58.468114   37778 status.go:422] ha-158602-m02 apiserver status = Running (err=<nil>)
	I0827 22:37:58.468124   37778 status.go:257] ha-158602-m02 status: &{Name:ha-158602-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:37:58.468144   37778 status.go:255] checking status of ha-158602-m04 ...
	I0827 22:37:58.468676   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.468715   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.483767   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0827 22:37:58.484234   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.484715   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.484738   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.485050   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.485286   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetState
	I0827 22:37:58.486999   37778 status.go:330] ha-158602-m04 host status = "Running" (err=<nil>)
	I0827 22:37:58.487017   37778 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:37:58.487295   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.487327   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.502774   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I0827 22:37:58.503245   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.503737   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.503758   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.504129   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.504349   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetIP
	I0827 22:37:58.507348   37778 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:37:58.507857   37778 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:35:26 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:37:58.507918   37778 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:37:58.508043   37778 host.go:66] Checking if "ha-158602-m04" exists ...
	I0827 22:37:58.508343   37778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:37:58.508375   37778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:37:58.523822   37778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0827 22:37:58.524277   37778 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:37:58.524789   37778 main.go:141] libmachine: Using API Version  1
	I0827 22:37:58.524812   37778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:37:58.525106   37778 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:37:58.525246   37778 main.go:141] libmachine: (ha-158602-m04) Calling .DriverName
	I0827 22:37:58.525421   37778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:37:58.525442   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHHostname
	I0827 22:37:58.527971   37778 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:37:58.528350   37778 main.go:141] libmachine: (ha-158602-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:d2:31", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:35:26 +0000 UTC Type:0 Mac:52:54:00:16:d2:31 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-158602-m04 Clientid:01:52:54:00:16:d2:31}
	I0827 22:37:58.528371   37778 main.go:141] libmachine: (ha-158602-m04) DBG | domain ha-158602-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:d2:31 in network mk-ha-158602
	I0827 22:37:58.528503   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHPort
	I0827 22:37:58.528683   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHKeyPath
	I0827 22:37:58.528815   37778 main.go:141] libmachine: (ha-158602-m04) Calling .GetSSHUsername
	I0827 22:37:58.528947   37778 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa Username:docker}
	W0827 22:38:17.060701   37778 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.17:22: connect: no route to host
	W0827 22:38:17.060790   37778 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	E0827 22:38:17.060803   37778 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	I0827 22:38:17.060810   37778 status.go:257] ha-158602-m04 status: &{Name:ha-158602-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0827 22:38:17.060840   37778 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr" : exit status 3
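The status failure comes down to the SSH dial error for ha-158602-m04 ("dial tcp 192.168.39.17:22: connect: no route to host"). A quick connectivity check, sketched here rather than copied from the report, reuses the node IP, user, and key path that appear in the log; the nc flags assume an OpenBSD-style netcat and the ConnectTimeout value is arbitrary.

	# Is TCP/22 on the worker node reachable at all?
	nc -vz 192.168.39.17 22
	# If it is, try the same SSH identity minikube uses for this machine
	ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602-m04/id_rsa \
	  docker@192.168.39.17 'uptime'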
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-158602 -n ha-158602
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-158602 logs -n 25: (1.743578016s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m04 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp testdata/cp-test.txt                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602:/home/docker/cp-test_ha-158602-m04_ha-158602.txt                       |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602 sudo cat                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602.txt                                 |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m02:/home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m02 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m03:/home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n                                                                 | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | ha-158602-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-158602 ssh -n ha-158602-m03 sudo cat                                          | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC | 27 Aug 24 22:26 UTC |
	|         | /home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-158602 node stop m02 -v=7                                                     | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-158602 node start m02 -v=7                                                    | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-158602 -v=7                                                           | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-158602 -v=7                                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-158602 --wait=true -v=7                                                    | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:31 UTC | 27 Aug 24 22:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-158602                                                                | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:35 UTC |                     |
	| node    | ha-158602 node delete m03 -v=7                                                   | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:35 UTC | 27 Aug 24 22:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-158602 stop -v=7                                                              | ha-158602 | jenkins | v1.33.1 | 27 Aug 24 22:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:31:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:31:53.389908   35579 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:31:53.390055   35579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:31:53.390066   35579 out.go:358] Setting ErrFile to fd 2...
	I0827 22:31:53.390072   35579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:31:53.390273   35579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:31:53.390921   35579 out.go:352] Setting JSON to false
	I0827 22:31:53.391881   35579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4460,"bootTime":1724793453,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:31:53.391940   35579 start.go:139] virtualization: kvm guest
	I0827 22:31:53.394330   35579 out.go:177] * [ha-158602] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:31:53.395832   35579 notify.go:220] Checking for updates...
	I0827 22:31:53.395860   35579 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:31:53.397279   35579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:31:53.398639   35579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:31:53.399899   35579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:31:53.401011   35579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:31:53.402200   35579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:31:53.403798   35579 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:31:53.403915   35579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:31:53.404557   35579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:31:53.404608   35579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:31:53.421650   35579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0827 22:31:53.422096   35579 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:31:53.422626   35579 main.go:141] libmachine: Using API Version  1
	I0827 22:31:53.422648   35579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:31:53.422990   35579 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:31:53.423179   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.459636   35579 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 22:31:53.461104   35579 start.go:297] selected driver: kvm2
	I0827 22:31:53.461119   35579 start.go:901] validating driver "kvm2" against &{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:31:53.461277   35579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:31:53.461600   35579 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:31:53.461696   35579 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:31:53.476675   35579 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:31:53.477305   35579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:31:53.477369   35579 cni.go:84] Creating CNI manager for ""
	I0827 22:31:53.477381   35579 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0827 22:31:53.477434   35579 start.go:340] cluster config:
	{Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:31:53.477587   35579 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:31:53.479644   35579 out.go:177] * Starting "ha-158602" primary control-plane node in "ha-158602" cluster
	I0827 22:31:53.480858   35579 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:31:53.480895   35579 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:31:53.480902   35579 cache.go:56] Caching tarball of preloaded images
	I0827 22:31:53.481032   35579 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:31:53.481056   35579 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:31:53.481230   35579 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/config.json ...
	I0827 22:31:53.481431   35579 start.go:360] acquireMachinesLock for ha-158602: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:31:53.481473   35579 start.go:364] duration metric: took 23.535µs to acquireMachinesLock for "ha-158602"
	I0827 22:31:53.481485   35579 start.go:96] Skipping create...Using existing machine configuration
	I0827 22:31:53.481493   35579 fix.go:54] fixHost starting: 
	I0827 22:31:53.481797   35579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:31:53.481826   35579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:31:53.495902   35579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0827 22:31:53.496344   35579 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:31:53.496841   35579 main.go:141] libmachine: Using API Version  1
	I0827 22:31:53.496861   35579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:31:53.497140   35579 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:31:53.497318   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.497453   35579 main.go:141] libmachine: (ha-158602) Calling .GetState
	I0827 22:31:53.499023   35579 fix.go:112] recreateIfNeeded on ha-158602: state=Running err=<nil>
	W0827 22:31:53.499062   35579 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 22:31:53.500934   35579 out.go:177] * Updating the running kvm2 "ha-158602" VM ...
	I0827 22:31:53.502345   35579 machine.go:93] provisionDockerMachine start ...
	I0827 22:31:53.502366   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:31:53.502587   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.505249   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.505724   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.505750   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.505863   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.506019   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.506167   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.506311   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.506493   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.506694   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.506706   35579 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 22:31:53.625309   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:31:53.625341   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.625620   35579 buildroot.go:166] provisioning hostname "ha-158602"
	I0827 22:31:53.625648   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.625844   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.628387   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.628808   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.628850   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.628940   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.629152   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.629317   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.629482   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.629618   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.629822   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.629842   35579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-158602 && echo "ha-158602" | sudo tee /etc/hostname
	I0827 22:31:53.760240   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-158602
	
	I0827 22:31:53.760271   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.763095   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.763637   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.763659   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.763911   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:53.764096   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.764259   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:53.764414   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:53.764563   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:53.764780   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:53.764810   35579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-158602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-158602/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-158602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:31:53.876725   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
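
The SSH command above makes the machine's own hostname resolve locally by editing (or appending) a 127.0.1.1 entry in /etc/hosts. A minimal Go sketch of building that guard script for an arbitrary hostname; the helper name and the use of fmt.Sprintf are assumptions for illustration, not minikube's actual provisioner code:

    package main

    import "fmt"

    // hostsFixupScript returns a shell snippet that maps 127.0.1.1 to the given
    // hostname: it rewrites an existing 127.0.1.1 line if present and appends
    // one otherwise, mirroring the command shown in the log above.
    func hostsFixupScript(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixupScript("ha-158602"))
    }
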
	I0827 22:31:53.876764   35579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:31:53.876813   35579 buildroot.go:174] setting up certificates
	I0827 22:31:53.876826   35579 provision.go:84] configureAuth start
	I0827 22:31:53.876845   35579 main.go:141] libmachine: (ha-158602) Calling .GetMachineName
	I0827 22:31:53.877109   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:31:53.879667   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.880063   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.880094   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.880260   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:53.882437   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.882836   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:53.882860   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:53.883031   35579 provision.go:143] copyHostCerts
	I0827 22:31:53.883061   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:31:53.883115   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:31:53.883130   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:31:53.883196   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:31:53.883325   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:31:53.883344   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:31:53.883348   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:31:53.883381   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:31:53.883437   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:31:53.883457   35579 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:31:53.883463   35579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:31:53.883483   35579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:31:53.883545   35579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.ha-158602 san=[127.0.0.1 192.168.39.77 ha-158602 localhost minikube]
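
The provisioner regenerates the Docker machine's server certificate with the SANs listed above (127.0.0.1, the VM IP 192.168.39.77, the machine name, localhost, minikube), signed by the cached CA. A self-contained sketch of issuing such a CA-signed certificate with Go's crypto/x509; key sizes, serial numbers and validity periods are illustrative assumptions, not the values minikube uses:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA, standing in for the cached ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-158602"}},
    		DNSNames:     []string{"ha-158602", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.77")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	cert, _ := x509.ParseCertificate(srvDER)
    	fmt.Println("issued server cert for:", cert.DNSNames, cert.IPAddresses)
    }
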
	I0827 22:31:54.045794   35579 provision.go:177] copyRemoteCerts
	I0827 22:31:54.045857   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:31:54.045881   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:54.048533   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.048899   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:54.048924   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.049093   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:54.049316   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.049501   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:54.049651   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
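
Each sshutil.go:53 line opens a new SSH connection to the VM using the machine's id_rsa key, and the runner then copies files and executes commands over such sessions. A minimal connection sketch with golang.org/x/crypto/ssh; host-key verification is skipped here purely to keep the sketch short, and the command run is just an example:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := "/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; not suitable for production
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.77:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("remote hostname: %s", out)
    }
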
	I0827 22:31:54.136040   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:31:54.136114   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:31:54.161677   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:31:54.161749   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0827 22:31:54.185953   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:31:54.186022   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 22:31:54.209655   35579 provision.go:87] duration metric: took 332.81472ms to configureAuth
	I0827 22:31:54.209677   35579 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:31:54.209907   35579 config.go:182] Loaded profile config "ha-158602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:31:54.209982   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:31:54.212839   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.213254   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:31:54.213278   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:31:54.213502   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:31:54.213727   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.213913   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:31:54.214058   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:31:54.214313   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:31:54.214614   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:31:54.214636   35579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:33:24.984074   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:33:24.984133   35579 machine.go:96] duration metric: took 1m31.481746518s to provisionDockerMachine
	I0827 22:33:24.984148   35579 start.go:293] postStartSetup for "ha-158602" (driver="kvm2")
	I0827 22:33:24.984163   35579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:33:24.984193   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:24.984605   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:33:24.984638   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:24.988420   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:24.988972   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:24.989005   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:24.989180   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:24.989428   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:24.989561   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:24.989683   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.080256   35579 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:33:25.084326   35579 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:33:25.084355   35579 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:33:25.084434   35579 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:33:25.084571   35579 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:33:25.084584   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:33:25.084708   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:33:25.094625   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:33:25.119132   35579 start.go:296] duration metric: took 134.967081ms for postStartSetup
	I0827 22:33:25.119184   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.119511   35579 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0827 22:33:25.119576   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.122295   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.122866   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.122894   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.123167   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.123385   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.123585   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.123713   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	W0827 22:33:25.211411   35579 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0827 22:33:25.211450   35579 fix.go:56] duration metric: took 1m31.729956586s for fixHost
	I0827 22:33:25.211473   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.214397   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.214900   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.214924   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.215197   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.215416   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.215609   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.215799   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.215980   35579 main.go:141] libmachine: Using SSH client type: native
	I0827 22:33:25.216169   35579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I0827 22:33:25.216180   35579 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:33:25.329116   35579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724798005.285062183
	
	I0827 22:33:25.329135   35579 fix.go:216] guest clock: 1724798005.285062183
	I0827 22:33:25.329142   35579 fix.go:229] Guest: 2024-08-27 22:33:25.285062183 +0000 UTC Remote: 2024-08-27 22:33:25.211458625 +0000 UTC m=+91.861749274 (delta=73.603558ms)
	I0827 22:33:25.329174   35579 fix.go:200] guest clock delta is within tolerance: 73.603558ms
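
The fixHost step reads the guest's `date +%s.%N`, compares it with the host clock, and logs whether the delta (73.6ms here) is within tolerance. A tiny sketch of that comparison; the one-second tolerance used below is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1724798005.285062183")       // guest value from the log
    	host := time.Unix(1724798005, 211458625)                  // host reference from the log
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := time.Second // illustrative tolerance
    	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta < tolerance)
    }
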
	I0827 22:33:25.329180   35579 start.go:83] releasing machines lock for "ha-158602", held for 1m31.847700276s
	I0827 22:33:25.329200   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.329450   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:33:25.331808   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.332198   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.332217   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.332340   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.332860   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.333072   35579 main.go:141] libmachine: (ha-158602) Calling .DriverName
	I0827 22:33:25.333174   35579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:33:25.333213   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.333308   35579 ssh_runner.go:195] Run: cat /version.json
	I0827 22:33:25.333332   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHHostname
	I0827 22:33:25.335898   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336300   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.336328   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336347   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.336478   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.336670   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.336840   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.336965   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:25.336993   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:25.337040   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.337152   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHPort
	I0827 22:33:25.337309   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHKeyPath
	I0827 22:33:25.337471   35579 main.go:141] libmachine: (ha-158602) Calling .GetSSHUsername
	I0827 22:33:25.337593   35579 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/ha-158602/id_rsa Username:docker}
	I0827 22:33:25.451717   35579 ssh_runner.go:195] Run: systemctl --version
	I0827 22:33:25.458610   35579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:33:25.622966   35579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 22:33:25.636565   35579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:33:25.636647   35579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:33:25.671232   35579 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
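
Because this multi-node cluster uses kindnet rather than the default bridge CNI, any bridge/podman config files under /etc/cni/net.d would be renamed with a .mk_disabled suffix; here none were found. A minimal Go sketch of the same disable-by-rename pass (the directory scan is an illustrative stand-in for the `find ... -exec mv` command shown above):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNIConfigs renames bridge/podman CNI configs in dir so the
    // container runtime ignores them, and returns the files it renamed.
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var moved []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return moved, err
    			}
    			moved = append(moved, src)
    		}
    	}
    	return moved, nil
    }

    func main() {
    	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	if len(moved) == 0 {
    		fmt.Println("no active bridge cni configs found - nothing to disable")
    		return
    	}
    	fmt.Println("disabled:", moved)
    }
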
	I0827 22:33:25.671262   35579 start.go:495] detecting cgroup driver to use...
	I0827 22:33:25.671345   35579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:33:25.703626   35579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:33:25.755043   35579 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:33:25.755116   35579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:33:25.806388   35579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:33:25.853623   35579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:33:26.020151   35579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:33:26.166536   35579 docker.go:233] disabling docker service ...
	I0827 22:33:26.166626   35579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:33:26.183592   35579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:33:26.198106   35579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:33:26.353712   35579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:33:26.501483   35579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:33:26.515568   35579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:33:26.534616   35579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:33:26.534671   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.545829   35579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:33:26.545903   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.556655   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.567287   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.577825   35579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:33:26.588591   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.599139   35579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:33:26.610726   35579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
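
The sed invocations above point CRI-O at the pause:3.10 image, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and open unprivileged low ports via default_sysctls. A rough Go equivalent of those in-place edits applied to the drop-in's text; the sample input below is an assumed minimal 02-crio.conf, not the VM's real file, and the conmon handling is simplified:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Mirror the sed edits from the log: pause image, cgroup driver, conmon cgroup.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
    		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	// Allow unprivileged low ports, as the default_sysctls edit does.
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	fmt.Print(conf)
    }
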
	I0827 22:33:26.624394   35579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:33:26.635657   35579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:33:26.646280   35579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:33:26.808632   35579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:33:27.947699   35579 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.139021259s)
	I0827 22:33:27.947734   35579 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:33:27.947791   35579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:33:27.952941   35579 start.go:563] Will wait 60s for crictl version
	I0827 22:33:27.953036   35579 ssh_runner.go:195] Run: which crictl
	I0827 22:33:27.956938   35579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:33:27.996625   35579 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
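
After restarting CRI-O, the start path waits up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer before continuing. A minimal polling loop in that spirit; the interval and the use of os.Stat are illustrative choices, not the exact minikube implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the timeout elapses.
    func waitForPath(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("crio socket is ready")
    }
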
	I0827 22:33:27.996703   35579 ssh_runner.go:195] Run: crio --version
	I0827 22:33:28.026540   35579 ssh_runner.go:195] Run: crio --version
	I0827 22:33:28.058429   35579 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:33:28.060198   35579 main.go:141] libmachine: (ha-158602) Calling .GetIP
	I0827 22:33:28.063145   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:28.063530   35579 main.go:141] libmachine: (ha-158602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:de:6a", ip: ""} in network mk-ha-158602: {Iface:virbr1 ExpiryTime:2024-08-27 23:22:19 +0000 UTC Type:0 Mac:52:54:00:25:de:6a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-158602 Clientid:01:52:54:00:25:de:6a}
	I0827 22:33:28.063559   35579 main.go:141] libmachine: (ha-158602) DBG | domain ha-158602 has defined IP address 192.168.39.77 and MAC address 52:54:00:25:de:6a in network mk-ha-158602
	I0827 22:33:28.063766   35579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:33:28.069069   35579 kubeadm.go:883] updating cluster {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:33:28.069435   35579 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:33:28.069553   35579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:33:28.114106   35579 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:33:28.114128   35579 crio.go:433] Images already preloaded, skipping extraction
	I0827 22:33:28.114187   35579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:33:28.147557   35579 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:33:28.147578   35579 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:33:28.147588   35579 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.31.0 crio true true} ...
	I0827 22:33:28.147692   35579 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-158602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 22:33:28.147750   35579 ssh_runner.go:195] Run: crio config
	I0827 22:33:28.194138   35579 cni.go:84] Creating CNI manager for ""
	I0827 22:33:28.194156   35579 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0827 22:33:28.194168   35579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:33:28.194187   35579 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-158602 NodeName:ha-158602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:33:28.194316   35579 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-158602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
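
The kubeadm configuration above is rendered from the node's values (advertise address, node name, pod and service CIDRs, control-plane endpoint, Kubernetes version). A stripped-down sketch of producing such a document with Go's text/template; the template here is a small illustrative excerpt, not minikube's actual kubeadm template:

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeadmParams struct {
    	AdvertiseAddress     string
    	BindPort             int
    	NodeName             string
    	PodSubnet            string
    	ServiceSubnet        string
    	ControlPlaneEndpoint string
    	KubernetesVersion    string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress:     "192.168.39.77",
    		BindPort:             8443,
    		NodeName:             "ha-158602",
    		PodSubnet:            "10.244.0.0/16",
    		ServiceSubnet:        "10.96.0.0/12",
    		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
    		KubernetesVersion:    "v1.31.0",
    	}
    	// Render to stdout; the real flow instead copies the rendered file to
    	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
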
	
	I0827 22:33:28.194335   35579 kube-vip.go:115] generating kube-vip config ...
	I0827 22:33:28.194377   35579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0827 22:33:28.205503   35579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0827 22:33:28.205640   35579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
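
kube-vip runs here with leader election on the plndr-cp-lock lease (vip_leaseduration 5s, vip_renewdeadline 3s, vip_retryperiod 1s), so a standby control-plane node can claim the 192.168.39.254 VIP before a stale holder's lease expires. The usual expectation for such settings is leaseDuration > renewDeadline > retryPeriod; a small sanity check of that ordering, offered purely as an illustration rather than anything kube-vip itself exposes:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values from the kube-vip manifest above.
    	leaseDuration := 5 * time.Second
    	renewDeadline := 3 * time.Second
    	retryPeriod := 1 * time.Second

    	if leaseDuration > renewDeadline && renewDeadline > retryPeriod {
    		fmt.Println("lease timings are consistent for leader election")
    	} else {
    		fmt.Println("warning: expected leaseDuration > renewDeadline > retryPeriod")
    	}
    }
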
	I0827 22:33:28.205700   35579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:33:28.215990   35579 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:33:28.216060   35579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0827 22:33:28.225708   35579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0827 22:33:28.242548   35579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:33:28.259662   35579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0827 22:33:28.276918   35579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0827 22:33:28.294236   35579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0827 22:33:28.299207   35579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:33:28.443447   35579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:33:28.457919   35579 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602 for IP: 192.168.39.77
	I0827 22:33:28.457943   35579 certs.go:194] generating shared ca certs ...
	I0827 22:33:28.457965   35579 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.458130   35579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:33:28.458193   35579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:33:28.458208   35579 certs.go:256] generating profile certs ...
	I0827 22:33:28.458301   35579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/client.key
	I0827 22:33:28.458341   35579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0
	I0827 22:33:28.458366   35579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77 192.168.39.142 192.168.39.91 192.168.39.254]
	I0827 22:33:28.752229   35579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 ...
	I0827 22:33:28.752262   35579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0: {Name:mkb9a41cd484507a2d5b50d3d0ae9a5258be4714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.752483   35579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0 ...
	I0827 22:33:28.752505   35579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0: {Name:mk8b483437cb4eaaa3018654e91bba6c4e419fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:33:28.752606   35579 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt.3db02db0 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt
	I0827 22:33:28.752806   35579 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key.3db02db0 -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key
	I0827 22:33:28.752971   35579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key
	I0827 22:33:28.752988   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:33:28.753006   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:33:28.753027   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:33:28.753049   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:33:28.753068   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:33:28.753087   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:33:28.753107   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:33:28.753127   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:33:28.753196   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:33:28.753237   35579 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:33:28.753250   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:33:28.753283   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:33:28.753318   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:33:28.753352   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:33:28.753427   35579 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:33:28.753473   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:33:28.753494   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:33:28.753507   35579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:28.754186   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:33:28.851668   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:33:29.079218   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:33:29.212882   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:33:29.326332   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0827 22:33:29.479144   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 22:33:29.725790   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:33:29.860168   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/ha-158602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0827 22:33:29.914171   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:33:29.961924   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:33:29.995385   35579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:33:30.036866   35579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:33:30.062379   35579 ssh_runner.go:195] Run: openssl version
	I0827 22:33:30.090404   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:33:30.115019   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.126097   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.126181   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:33:30.139487   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:33:30.159855   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:33:30.179089   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.185656   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.185724   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:33:30.191400   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:33:30.202883   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:33:30.217105   35579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.221979   35579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.222031   35579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:33:30.229820   35579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 22:33:30.242132   35579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:33:30.248850   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 22:33:30.256036   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 22:33:30.264575   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 22:33:30.272653   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 22:33:30.279388   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 22:33:30.285460   35579 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
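
Each `openssl x509 -checkend 86400` run above asks whether the certificate will expire within the next 24 hours, so a near-expiry control-plane cert gets regenerated before kubeadm is invoked. An equivalent check in Go, parsing a PEM certificate from disk; the path below is the apiserver-kubelet-client cert from the log, and error handling is kept minimal for the sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
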
	I0827 22:33:30.291392   35579 kubeadm.go:392] StartCluster: {Name:ha-158602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-158602 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.142 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.91 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:33:30.291527   35579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:33:30.291605   35579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:33:30.352539   35579 cri.go:89] found id: "fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c"
	I0827 22:33:30.352563   35579 cri.go:89] found id: "80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8"
	I0827 22:33:30.352567   35579 cri.go:89] found id: "d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468"
	I0827 22:33:30.352570   35579 cri.go:89] found id: "cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae"
	I0827 22:33:30.352573   35579 cri.go:89] found id: "88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b"
	I0827 22:33:30.352576   35579 cri.go:89] found id: "5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0"
	I0827 22:33:30.352579   35579 cri.go:89] found id: "9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e"
	I0827 22:33:30.352585   35579 cri.go:89] found id: "6d81ed0028836c65f03d647548e3e5428c3a7c3ea78c602e8859da81460f5be7"
	I0827 22:33:30.352588   35579 cri.go:89] found id: "bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25"
	I0827 22:33:30.352593   35579 cri.go:89] found id: "7d7040ed93da7173a21ab0833477864db295fa399704456dbcf15e700138abf0"
	I0827 22:33:30.352595   35579 cri.go:89] found id: "70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d"
	I0827 22:33:30.352598   35579 cri.go:89] found id: "c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3"
	I0827 22:33:30.352609   35579 cri.go:89] found id: "9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03"
	I0827 22:33:30.352612   35579 cri.go:89] found id: "79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730"
	I0827 22:33:30.352616   35579 cri.go:89] found id: "a18851305e21f1027a340b8bf10ef1035c3af99bc2265e45373c83a3f1f5310f"
	I0827 22:33:30.352619   35579 cri.go:89] found id: "eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff"
	I0827 22:33:30.352621   35579 cri.go:89] found id: "961aabfc8401a99bafb3b3f0331858223cc1ea7de147e1acf56132b3e9e34280"
	I0827 22:33:30.352625   35579 cri.go:89] found id: "ad2032c0ac6742983457c7127109c71e7fcab31d210274981fde090255dcc55d"
	I0827 22:33:30.352628   35579 cri.go:89] found id: "60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f"
	I0827 22:33:30.352630   35579 cri.go:89] found id: ""
	I0827 22:33:30.352680   35579 ssh_runner.go:195] Run: sudo runc list -f json
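The enumeration above (cri.go) asks CRI-O for every container, running or exited, that carries the kube-system namespace label, and then lists the low-level OCI containers with runc. A minimal sketch of the same two queries, using only the flags shown in the log (both need root on the node):

  # container IDs known to CRI-O in the kube-system namespace, including exited ones
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  # OCI runtime view of the containers on this node, as JSON
  sudo runc list -f json
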
	
	
	==> CRI-O <==
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.675324080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798297675301096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd619179-a08d-4cf4-994b-0a6f5c6b079c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.675866465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91ad1d08-b8d2-47ec-8281-d19908cac977 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.675933015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91ad1d08-b8d2-47ec-8281-d19908cac977 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.676360230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91ad1d08-b8d2-47ec-8281-d19908cac977 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.715992157Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=313503b3-94e6-47b1-b057-95918d3b0828 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.716068514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=313503b3-94e6-47b1-b057-95918d3b0828 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.717929016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35bd432a-410f-4ac7-82bb-a329db059343 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.719115402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798297719085114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35bd432a-410f-4ac7-82bb-a329db059343 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.724430816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10ab7033-1a8d-4de4-a757-7702f4bba28c name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.724536375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10ab7033-1a8d-4de4-a757-7702f4bba28c name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.724953186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10ab7033-1a8d-4de4-a757-7702f4bba28c name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.773122153Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abede69d-2ed8-4793-b31f-0aafb5674c74 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.773198245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abede69d-2ed8-4793-b31f-0aafb5674c74 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.774258361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=598362ea-c029-4ebf-81b0-a9e42dc19b71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.775058845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798297775032773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=598362ea-c029-4ebf-81b0-a9e42dc19b71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.776011452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14d590cc-6fc5-419d-a483-808f3b25541a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.776066477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14d590cc-6fc5-419d-a483-808f3b25541a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.776549943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14d590cc-6fc5-419d-a483-808f3b25541a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.820823535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dce7c76e-f579-4f2d-ac7d-a09caae54376 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.820895046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dce7c76e-f579-4f2d-ac7d-a09caae54376 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.822163636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b40867d-1520-45f6-a60a-5fe39086c473 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.823915366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798297823888397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b40867d-1520-45f6-a60a-5fe39086c473 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.824420664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0eb9fdf0-1a65-445c-830a-0ca6c9492207 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.824511768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0eb9fdf0-1a65-445c-830a-0ca6c9492207 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:38:17 ha-158602 crio[3814]: time="2024-08-27 22:38:17.825064942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:601495ed11e175a3151f91198bed509ed98bbfadb2ee495e26ae097001c3d1fa,PodSandboxId:12c1f8c9dbd6ef212db75c1ed6f572861f485e1b7db2c3a0c41ed01f322e4bb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724798065367079378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb20f3f202c26d63a2d0f7aa2467be2339837f3b3d10f502b25fb18690090eaf,PodSandboxId:f13a0a1ea9db9a920f17bd316d792e39266be4902749b146dd889befa4525828,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724798056280235278,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724798039344978183,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec290e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724798034342968304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9caa2a6fe79af73ebe557f23f35214e06f04993e966f652f05dc981070ba8ee2,PodSandboxId:d90d7ab1f41bd586b09c3c9f3c3d2204226a870f644fc0b5fbe1d783b649195a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724798022651694612,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de46c1cb8c2582eff6fee1f89b654d02,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a,PodSandboxId:e5bfc7f83bb70709e0bc5b1fe3c14a729abaef8fd026ce48802418e0139df101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724798010439879040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c,PodSandboxId:65041c74f2cb3a300398e47f84e0c0f93f4b3ffa7c7672f097b74661adcc3355,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009630230163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8,PodSandboxId:a285158c423b021a84715259694a1d0125222b95e0fe899f34d372c4f44e7e8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724798009449606374,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae,PodSandboxId:9d5793d8697402987b1847557b6f97b077602e37faca7b5ba862ca53703edb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724798009431042868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468,PodSandboxId:8b6b964e2cbb6b15ec4f227bbd27fa8acc531e9cd175dacd19361da32fe4e3e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724798009442365127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0,PodSandboxId:19e1d6606b8c743f66cb195ab356ac7a64f40c80d410c462bd81c8ea67bdac54,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724798009208096359,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b2
22d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b,PodSandboxId:89df003e83b42b8b5229004044f854e2f8bf57c70ac86543e69a28baa8fae022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724798009237534684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec29
0e5a1a66785f682b1d4c358afddc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e,PodSandboxId:59aa70c5f3481e99ac4da1827d36b7adc5641a3194aca043e4d4d4ecdb0a6c08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724798009041612455,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f88305a8db16adac4a0b009c
4777d1,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb94f6a77a8a864dddac2b9149ad6a474fd78d9522ebe8972be37a416330df25,PodSandboxId:ffbe4fc48196ec7df744ba98c0f64aa2f7aaa8d2e7371e308e77875185badce2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724797897343027493,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6442070-e677-44c6-ac72-4b9f8dedc67a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6577993a571ba783ae5923dcad5e0d6849e61771582ef5043d682fdba1f135e4,PodSandboxId:4f329cad0ee8c25ae2e0d764fafbe9c4032e80395de5c3e0bee74245ea0321d5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724797524660248065,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-gxvsc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f0d789d-e3bb-4ed7-a21b-7339eed1d5ce,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d,PodSandboxId:922e19e19e6b3f2001c039ad985dc0e4202cf746b64289fdc62396b6a2b15b50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386256391148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x6dcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6366bf54-23c5-475c-81a8-a0d9197e7335,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3,PodSandboxId:7e95e9aaf3336145b582dc4ecefe31bb90033260d50f14353968ff345494c14b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724797386186634188,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxzgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f0b233-f708-42e4-ad45-5a6688b3252e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03,PodSandboxId:d113f6cede364a47f013fca03dc5daa910cc7812f559af271964f5cfe8ff0044,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724797374249948045,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kb84t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094023b9-ea07-4014-a601-2e2a8b723805,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730,PodSandboxId:240775e6cca6ce0371ede66c9fb8c8f4e9718585b7d01b90bbb3deb655b90cd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724797370715607895,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5pmrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3bdfa97-3f65-4fb1-aeec-4c24a7cd4f00,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff,PodSandboxId:71d74ecb9f3009afa9acb6fec11fd06cae12e3f5e5f327d8de1a1b3352cf9fba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724797359512645497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd31968b222d7f337deac1388ea4ecd4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f,PodSandboxId:5e03fa37bf662f86376a1b7cd1edfed21bbc3761b41fcb1b1c14f7143584a94d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724797359456411226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-158602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df14f5b800d1b46b2fd76ba679833155,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0eb9fdf0-1a65-445c-830a-0ca6c9492207 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	601495ed11e17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   12c1f8c9dbd6e       storage-provisioner
	eb20f3f202c26       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   f13a0a1ea9db9       busybox-7dff88458-gxvsc
	fb475e1179e65       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   89df003e83b42       kube-controller-manager-ha-158602
	a5735ac34d1ff       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   59aa70c5f3481       kube-apiserver-ha-158602
	9caa2a6fe79af       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   d90d7ab1f41bd       kube-vip-ha-158602
	f4bc6a8b4d535       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   e5bfc7f83bb70       kube-proxy-5pmrv
	fe2fe55492557       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   65041c74f2cb3       coredns-6f6b679f8f-jxzgs
	80d6fdca5fb24       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   a285158c423b0       kindnet-kb84t
	d6671aef22454       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   8b6b964e2cbb6       coredns-6f6b679f8f-x6dcd
	cc8a19b5a2e06       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   9d5793d869740       kube-scheduler-ha-158602
	88d8ca73b340f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   1                   89df003e83b42       kube-controller-manager-ha-158602
	5d29b152972a1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   19e1d6606b8c7       etcd-ha-158602
	9de12fe017aa2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Exited              kube-apiserver            2                   59aa70c5f3481       kube-apiserver-ha-158602
	bb94f6a77a8a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       4                   ffbe4fc48196e       storage-provisioner
	6577993a571ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   4f329cad0ee8c       busybox-7dff88458-gxvsc
	70a0959d7fc34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   922e19e19e6b3       coredns-6f6b679f8f-x6dcd
	c1556743f3ed7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   7e95e9aaf3336       coredns-6f6b679f8f-jxzgs
	9006fd58dfc63       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    15 minutes ago      Exited              kindnet-cni               0                   d113f6cede364       kindnet-kb84t
	79ea4c0053fb1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      15 minutes ago      Exited              kube-proxy                0                   240775e6cca6c       kube-proxy-5pmrv
	eb6e08e1cf880       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   71d74ecb9f300       etcd-ha-158602
	60feae8b5d1f0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      15 minutes ago      Exited              kube-scheduler            0                   5e03fa37bf662       kube-scheduler-ha-158602
	
	
	==> coredns [70a0959d7fc34de06d0c50ce726e2755b39c9bcdd8a7825ecff9c940070bfb6d] <==
	[INFO] 10.244.0.4:43032 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001634431s
	[INFO] 10.244.0.4:57056 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135477s
	[INFO] 10.244.0.4:60425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128187s
	[INFO] 10.244.0.4:33910 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092983s
	[INFO] 10.244.2.2:55029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001617414s
	[INFO] 10.244.2.2:43643 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085283s
	[INFO] 10.244.2.2:33596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116719s
	[INFO] 10.244.1.2:36406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011994s
	[INFO] 10.244.1.2:45944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072161s
	[INFO] 10.244.0.4:34595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083932s
	[INFO] 10.244.0.4:56369 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051489s
	[INFO] 10.244.0.4:45069 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052963s
	[INFO] 10.244.2.2:41980 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118063s
	[INFO] 10.244.1.2:35610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170436s
	[INFO] 10.244.1.2:39033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193301s
	[INFO] 10.244.1.2:58078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123451s
	[INFO] 10.244.1.2:50059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128271s
	[INFO] 10.244.0.4:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010237s
	[INFO] 10.244.0.4:58359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080338s
	[INFO] 10.244.2.2:35482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009539s
	[INFO] 10.244.2.2:45798 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087557s
	[INFO] 10.244.2.2:39340 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090317s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1849&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [c1556743f3ed7494ab1dc0469c184c0cb51e20035a63fff0394d332b9fded5a3] <==
	[INFO] 10.244.1.2:34885 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000226601s
	[INFO] 10.244.1.2:54874 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014649s
	[INFO] 10.244.1.2:34031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187993s
	[INFO] 10.244.1.2:39560 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019907s
	[INFO] 10.244.0.4:43688 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012926s
	[INFO] 10.244.0.4:51548 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001519492s
	[INFO] 10.244.0.4:58561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052435s
	[INFO] 10.244.2.2:48091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180149s
	[INFO] 10.244.2.2:45077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104198s
	[INFO] 10.244.2.2:41789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215182s
	[INFO] 10.244.2.2:52731 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064319s
	[INFO] 10.244.2.2:43957 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126173s
	[INFO] 10.244.1.2:55420 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084801s
	[INFO] 10.244.1.2:45306 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059642s
	[INFO] 10.244.0.4:46103 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117802s
	[INFO] 10.244.2.2:39675 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191879s
	[INFO] 10.244.2.2:43022 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100522s
	[INFO] 10.244.2.2:53360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093376s
	[INFO] 10.244.0.4:36426 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132899s
	[INFO] 10.244.0.4:42082 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000167434s
	[INFO] 10.244.2.2:36926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139785s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1791&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1785&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [d6671aef22454c9533dc037e75f5ae36e3831457dc6e185cee0a8ca8f5d63468] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe2fe554925577262cdec34d970a488fd84620f503cd2a7830117784d777c15c] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
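
	The reflector errors in the two coredns sections above come from the kubernetes plugin repeatedly failing to list Services, EndpointSlices and Namespaces at the apiserver service VIP (10.96.0.1:443) while the control plane is restarting. As a hedged illustration only (not part of the captured log), here is a minimal client-go sketch of the same kind of list call, assuming it runs in a pod with in-cluster credentials:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the kubernetes service VIP (10.96.0.1:443
		// in this cluster), the same endpoint the CoreDNS reflector dials above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A reflector issues an equivalent List (limit=500) before it can watch.
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			// While the apiserver is unreachable this surfaces as "connection
			// refused" or "no route to host", matching the log lines above.
			fmt.Println("list services failed:", err)
			return
		}
		fmt.Println("services visible to this client:", len(svcs.Items))
	}

	Once the apiserver comes back, the same call succeeds and the "plugin/ready: Still waiting" messages stop.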
	
	
	==> describe nodes <==
	Name:               ha-158602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:22:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:34:11 +0000   Tue, 27 Aug 2024 22:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.77
	  Hostname:    ha-158602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f393f25de7274e45b62eb7b988ece32c
	  System UUID:                f393f25d-e727-4e45-b62e-b7b988ece32c
	  Boot ID:                    a1b3c582-a6fa-4ddf-91a6-fe921f43a40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gxvsc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-jxzgs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-x6dcd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-158602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-kb84t                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-158602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-158602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5pmrv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-158602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-158602                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m4s                   kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-158602 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-158602 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-158602 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-158602 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Warning  ContainerGCFailed        5m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m57s (x3 over 5m46s)  kubelet          Node ha-158602 status is now: NodeNotReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-158602 event: Registered Node ha-158602 in Controller
	
	
	Name:               ha-158602-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_23_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:23:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:38:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:35:01 +0000   Tue, 27 Aug 2024 22:34:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    ha-158602-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b63e2f54de44a9e8ad7eb0ee8626bfb
	  System UUID:                1b63e2f5-4de4-4a9e-8ad7-eb0ee8626bfb
	  Boot ID:                    28954d1d-3c7c-4000-b674-990248834daf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-crtgh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-158602-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-zmc6v                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-158602-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-158602-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-slgmm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-158602-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-158602-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m56s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-158602-m02 status is now: NodeNotReady
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-158602-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-158602-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-158602-m02 event: Registered Node ha-158602-m02 in Controller
	
	
	Name:               ha-158602-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-158602-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=ha-158602
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:25:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-158602-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:35:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:36:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:36:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:36:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 27 Aug 2024 22:35:31 +0000   Tue, 27 Aug 2024 22:36:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-158602-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad10535aaed444b79090a76efa3929c7
	  System UUID:                ad10535a-aed4-44b7-9090-a76efa3929c7
	  Boot ID:                    7e6d87ac-956c-465c-9c5a-34c53f3cdbb7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gq5t8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-c6szl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-658sj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-158602-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   NodeNotReady             3m29s                  node-controller  Node ha-158602-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-158602-m04 event: Registered Node ha-158602-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-158602-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-158602-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-158602-m04 has been rebooted, boot id: 7e6d87ac-956c-465c-9c5a-34c53f3cdbb7
	  Normal   NodeReady                2m47s                  kubelet          Node ha-158602-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-158602-m04 status is now: NodeNotReady
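
	The node descriptions above show ha-158602-m04 with Unknown conditions and the node.kubernetes.io/unreachable taints after its kubelet stopped posting status. As a hedged sketch only (the kubeconfig path is an assumption, not a value from this report), the same conditions and taints can be read with client-go:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the test harness uses its own path.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-158602-m04", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The Ready condition flips to Unknown when the kubelet stops posting
		// status; the node controller then adds the unreachable taints shown above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-8s %s\n", c.Type, c.Status, c.Reason)
		}
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}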
	
	
	==> dmesg <==
	[  +0.054656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053782] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.198923] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125102] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.284457] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.718918] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +3.171591] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.060183] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.161491] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.086175] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.529529] kauditd_printk_skb: 21 callbacks suppressed
	[Aug27 22:23] kauditd_printk_skb: 38 callbacks suppressed
	[ +39.211142] kauditd_printk_skb: 26 callbacks suppressed
	[Aug27 22:30] kauditd_printk_skb: 1 callbacks suppressed
	[Aug27 22:33] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.147661] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[  +0.157117] systemd-fstab-generator[3753]: Ignoring "noauto" option for root device
	[  +0.174500] systemd-fstab-generator[3767]: Ignoring "noauto" option for root device
	[  +0.148291] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.313486] systemd-fstab-generator[3807]: Ignoring "noauto" option for root device
	[  +1.634470] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[  +2.520646] kauditd_printk_skb: 227 callbacks suppressed
	[ +23.556037] kauditd_printk_skb: 5 callbacks suppressed
	[Aug27 22:34] kauditd_printk_skb: 2 callbacks suppressed
	[Aug27 22:35] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [5d29b152972a148849046d80694ea538b666236823428d251ca8c4f020e67cf0] <==
	{"level":"info","ts":"2024-08-27T22:34:58.632092Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"226361457cf4c252","to":"94fcd24071fd3def","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-27T22:34:58.632126Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:34:58.659935Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:00.040360Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-27T22:35:00.043751Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"0s","error":"dial tcp 192.168.39.91:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-27T22:35:44.471549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226361457cf4c252 switched to configuration voters=(2477931171060957778 10560038235854091378)"}
	{"level":"info","ts":"2024-08-27T22:35:44.473876Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b43d13dd46d94ad8","local-member-id":"226361457cf4c252","removed-remote-peer-id":"94fcd24071fd3def","removed-remote-peer-urls":["https://192.168.39.91:2380"]}
	{"level":"info","ts":"2024-08-27T22:35:44.473977Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.474045Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"226361457cf4c252","removed-member-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.474176Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-27T22:35:44.474377Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.474494Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.474835Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.474914Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.475014Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.475382Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def","error":"context canceled"}
	{"level":"warn","ts":"2024-08-27T22:35:44.475521Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"94fcd24071fd3def","error":"failed to read 94fcd24071fd3def on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-27T22:35:44.475581Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.475845Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def","error":"context canceled"}
	{"level":"info","ts":"2024-08-27T22:35:44.475921Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.475975Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.476021Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"226361457cf4c252","removed-remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:35:44.476073Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"226361457cf4c252","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.493833Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"226361457cf4c252","remote-peer-id-stream-handler":"226361457cf4c252","remote-peer-id-from":"94fcd24071fd3def"}
	{"level":"warn","ts":"2024-08-27T22:35:44.493850Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"226361457cf4c252","remote-peer-id-stream-handler":"226361457cf4c252","remote-peer-id-from":"94fcd24071fd3def"}
	
	
	==> etcd [eb6e08e1cf880082c45acf8984f8eb5fd61a73c3676d119c636e189c9eb0c3ff] <==
	{"level":"warn","ts":"2024-08-27T22:31:54.561474Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94fcd24071fd3def","rtt":"7.811127ms","error":"dial tcp 192.168.39.91:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-27T22:31:54.561509Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94fcd24071fd3def","rtt":"875.28µs","error":"dial tcp 192.168.39.91:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-27T22:31:54.626265Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:31:54.626318Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.77:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T22:31:54.626395Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"226361457cf4c252","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-27T22:31:54.626614Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626634Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626655Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626751Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626815Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626884Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626928Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"928ccad376a03472"}
	{"level":"info","ts":"2024-08-27T22:31:54.626953Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627002Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627072Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627169Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627222Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627282Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"226361457cf4c252","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.627326Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94fcd24071fd3def"}
	{"level":"info","ts":"2024-08-27T22:31:54.630889Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"warn","ts":"2024-08-27T22:31:54.630923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.134611451s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-27T22:31:54.631041Z","caller":"traceutil/trace.go:171","msg":"trace[1544599587] range","detail":"{range_begin:; range_end:; }","duration":"9.13474934s","start":"2024-08-27T22:31:45.496280Z","end":"2024-08-27T22:31:54.631030Z","steps":["trace[1544599587] 'agreement among raft nodes before linearized reading'  (duration: 9.134608012s)"],"step_count":1}
	{"level":"error","ts":"2024-08-27T22:31:54.631108Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-27T22:31:54.631154Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.77:2380"}
	{"level":"info","ts":"2024-08-27T22:31:54.631322Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-158602","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.77:2380"],"advertise-client-urls":["https://192.168.39.77:2379"]}
	
	
	==> kernel <==
	 22:38:18 up 16 min,  0 users,  load average: 0.17, 0.46, 0.33
	Linux ha-158602 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [80d6fdca5fb24efd7c8ed9b2d5a82f885596f6600f4c7b12b41f4ec69f32a2c8] <==
	I0827 22:37:30.790408       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:37:40.790538       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:37:40.790701       1 main.go:299] handling current node
	I0827 22:37:40.790742       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:37:40.790761       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:37:40.790958       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:37:40.790986       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:37:50.791067       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:37:50.791112       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:37:50.791313       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:37:50.791338       1 main.go:299] handling current node
	I0827 22:37:50.791376       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:37:50.791382       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:38:00.795651       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:38:00.795792       1 main.go:299] handling current node
	I0827 22:38:00.795821       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:38:00.795839       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:38:00.796053       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:38:00.796096       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:38:10.789680       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:38:10.789753       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:38:10.789923       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:38:10.789931       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:38:10.790027       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:38:10.790049       1 main.go:299] handling current node
	
	
	==> kindnet [9006fd58dfc634f72d821b784b6a7389e63fe22f056d1a03e97fd0372cb65a03] <==
	I0827 22:31:15.263572       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:25.271183       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:25.271236       1 main.go:299] handling current node
	I0827 22:31:25.271260       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:25.271266       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:25.271392       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:25.271409       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:25.271530       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:25.271551       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:35.262560       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:35.262707       1 main.go:299] handling current node
	I0827 22:31:35.262736       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:35.262756       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:35.262940       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:35.262968       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	I0827 22:31:35.263040       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:35.263061       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:45.270624       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0827 22:31:45.270698       1 main.go:322] Node ha-158602-m04 has CIDR [10.244.3.0/24] 
	I0827 22:31:45.270914       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0827 22:31:45.270938       1 main.go:299] handling current node
	I0827 22:31:45.270970       1 main.go:295] Handling node with IPs: map[192.168.39.142:{}]
	I0827 22:31:45.270976       1 main.go:322] Node ha-158602-m02 has CIDR [10.244.1.0/24] 
	I0827 22:31:45.271058       1 main.go:295] Handling node with IPs: map[192.168.39.91:{}]
	I0827 22:31:45.271079       1 main.go:322] Node ha-158602-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9de12fe017aa2a8895798629886be27998305041ab9a740501f1b03fe96e215e] <==
	I0827 22:33:29.571884       1 options.go:228] external host was not specified, using 192.168.39.77
	I0827 22:33:29.574178       1 server.go:142] Version: v1.31.0
	I0827 22:33:29.574237       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0827 22:33:29.575120       1 run.go:72] "command failed" err="tls: private key does not match public key"
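
	This first apiserver instance exits immediately with "tls: private key does not match public key". As a minimal standard-library sketch of the check behind that class of error (the file names are placeholders, not paths from the report):

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// Placeholder paths; point these at the serving cert and key under test.
		// tls.LoadX509KeyPair fails when the private key does not match the
		// certificate's public key, which is the failure mode logged above.
		if _, err := tls.LoadX509KeyPair("apiserver.crt", "apiserver.key"); err != nil {
			fmt.Println("cert/key pair rejected:", err)
			return
		}
		fmt.Println("certificate and private key match")
	}

	The second apiserver instance in the next section starts with a consistent pair and comes up normally.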
	
	
	==> kube-apiserver [a5735ac34d1ffe3a62881f2b13039e3a6a7b0cca4c32acb9df76b583e967f461] <==
	I0827 22:34:05.720189       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 22:34:05.720831       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 22:34:05.809006       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 22:34:05.809032       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 22:34:05.809143       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 22:34:05.809599       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 22:34:05.809612       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0827 22:34:05.809896       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 22:34:05.810349       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 22:34:05.810488       1 aggregator.go:171] initial CRD sync complete...
	I0827 22:34:05.810526       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 22:34:05.810549       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 22:34:05.810571       1 cache.go:39] Caches are synced for autoregister controller
	I0827 22:34:05.810727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 22:34:05.815210       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 22:34:05.822058       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0827 22:34:05.824998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.91]
	I0827 22:34:05.827259       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 22:34:05.827386       1 policy_source.go:224] refreshing policies
	I0827 22:34:05.827855       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:34:05.839640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0827 22:34:05.847735       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0827 22:34:05.908115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 22:34:06.720577       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0827 22:34:07.165422       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.77 192.168.39.91]
	
	
	==> kube-controller-manager [88d8ca73b340fcd43d1df4b3026dc4ab68a5375075e42c18fa26c771ea5b479b] <==
	I0827 22:33:30.326127       1 serving.go:386] Generated self-signed cert in-memory
	I0827 22:33:30.740538       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0827 22:33:30.740628       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:33:30.742489       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 22:33:30.742680       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 22:33:30.743151       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0827 22:33:30.743240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0827 22:33:40.746423       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.77:8443/healthz\": dial tcp 192.168.39.77:8443: connect: connection refused"
	
	
	==> kube-controller-manager [fb475e1179e65dedb063e564ae587677f1c428cb3f3c8afea7bf48045b905e43] <==
	I0827 22:36:33.378838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:36:33.404239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:36:33.445387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.135718ms"
	I0827 22:36:33.446098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.86µs"
	I0827 22:36:34.540511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	I0827 22:36:38.516536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-158602-m04"
	E0827 22:36:49.188849       1 gc_controller.go:151] "Failed to get node" err="node \"ha-158602-m03\" not found" logger="pod-garbage-collector-controller" node="ha-158602-m03"
	E0827 22:36:49.189006       1 gc_controller.go:151] "Failed to get node" err="node \"ha-158602-m03\" not found" logger="pod-garbage-collector-controller" node="ha-158602-m03"
	E0827 22:36:49.189052       1 gc_controller.go:151] "Failed to get node" err="node \"ha-158602-m03\" not found" logger="pod-garbage-collector-controller" node="ha-158602-m03"
	E0827 22:36:49.189086       1 gc_controller.go:151] "Failed to get node" err="node \"ha-158602-m03\" not found" logger="pod-garbage-collector-controller" node="ha-158602-m03"
	E0827 22:36:49.189110       1 gc_controller.go:151] "Failed to get node" err="node \"ha-158602-m03\" not found" logger="pod-garbage-collector-controller" node="ha-158602-m03"
	I0827 22:36:49.200329       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-158602-m03"
	I0827 22:36:49.228904       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-158602-m03"
	I0827 22:36:49.228944       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-158602-m03"
	I0827 22:36:49.257676       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-158602-m03"
	I0827 22:36:49.257788       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-158602-m03"
	I0827 22:36:49.288341       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-158602-m03"
	I0827 22:36:49.288383       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-158602-m03"
	I0827 22:36:49.324149       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-158602-m03"
	I0827 22:36:49.324191       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nhjgk"
	I0827 22:36:49.360248       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nhjgk"
	I0827 22:36:49.360282       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-158602-m03"
	I0827 22:36:49.383292       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-158602-m03"
	I0827 22:36:49.383371       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9wgcl"
	I0827 22:36:49.421235       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9wgcl"
	
	
	==> kube-proxy [79ea4c0053fb1d60f2b0748b057c7fe5f8b2cd7633298fc55465e675a3591730] <==
	E0827 22:30:43.356287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:43.356676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:43.357108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:43.356822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:43.357209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:50.523883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	W0827 22:30:50.524979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:50.525102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:30:50.524763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:30:50.525161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0827 22:30:50.525043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.812648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.812910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.813189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.813303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:02.813887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:02.813951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:24.316914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:24.317749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:27.389576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:27.390372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1840\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:27.390666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:27.390741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1745\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0827 22:31:51.964727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841": dial tcp 192.168.39.254:8443: connect: no route to host
	E0827 22:31:51.964897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-158602&resourceVersion=1841\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [f4bc6a8b4d5356a0c41d50b38422893e620264eff95b6a12ec1233c6edfd617a] <==
	E0827 22:33:33.340920       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:36.412679       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:39.485096       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:45.628024       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0827 22:33:54.844710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-158602\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0827 22:34:13.293020       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.77"]
	E0827 22:34:13.293128       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:34:13.327281       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:34:13.327327       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:34:13.327356       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:34:13.329746       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:34:13.330126       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:34:13.330173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:34:13.331640       1 config.go:197] "Starting service config controller"
	I0827 22:34:13.331705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:34:13.331747       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:34:13.331777       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:34:13.341506       1 config.go:326] "Starting node config controller"
	I0827 22:34:13.341518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:34:13.432521       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:34:13.432533       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:34:13.441589       1 shared_informer.go:320] Caches are synced for node config
	W0827 22:36:58.648067       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0827 22:36:58.648192       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0827 22:36:58.648227       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [60feae8b5d1f0defccdc7c564564d68d82cf8e72719225577c4fad82dcf73b7f] <==
	E0827 22:22:43.769600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:22:43.826851       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:22:43.828024       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:22:46.441310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:25:57.773909       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.774761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-658sj\": pod kube-proxy-658sj is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-658sj"
	I0827 22:25:57.775154       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-658sj" node="ha-158602-m04"
	E0827 22:25:57.831035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:25:57.831164       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9f48452c-8a4b-403b-9da9-90f2dab5ec70(kube-system/kube-proxy-d6zj9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d6zj9"
	E0827 22:25:57.831230       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d6zj9\": pod kube-proxy-d6zj9 is already assigned to node \"ha-158602-m04\"" pod="kube-system/kube-proxy-d6zj9"
	I0827 22:25:57.831281       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d6zj9" node="ha-158602-m04"
	E0827 22:31:45.963326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:47.585165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0827 22:31:47.598816       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0827 22:31:49.310966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0827 22:31:49.394751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0827 22:31:49.884082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:50.052341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0827 22:31:50.176477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0827 22:31:50.298800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0827 22:31:50.357582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0827 22:31:50.682040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0827 22:31:51.093515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0827 22:31:53.076340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0827 22:31:54.328325       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cc8a19b5a2e068d583747b3cc0d49a5939f3e058331d163bdd4e67e320885eae] <==
	W0827 22:33:49.267259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:49.267389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:49.398410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:49.398613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:50.288430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.77:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:50.288584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:50.385924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:50.385988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:33:51.689168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.77:8443: connect: connection refused
	E0827 22:33:51.689288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError"
	W0827 22:34:05.745876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 22:34:05.745929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 22:34:05.746146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:34:05.746245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.746316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:34:05.746340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:34:05.751966       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:34:05.752010       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0827 22:34:54.584690       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:35:41.195587       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gq5t8\": pod busybox-7dff88458-gq5t8 is already assigned to node \"ha-158602-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-gq5t8" node="ha-158602-m04"
	E0827 22:35:41.198315       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9e78b99d-78ad-4270-914c-25ea30134e10(default/busybox-7dff88458-gq5t8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-gq5t8"
	E0827 22:35:41.198492       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-gq5t8\": pod busybox-7dff88458-gq5t8 is already assigned to node \"ha-158602-m04\"" pod="default/busybox-7dff88458-gq5t8"
	I0827 22:35:41.198586       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-gq5t8" node="ha-158602-m04"
	
	
	==> kubelet <==
	Aug 27 22:36:45 ha-158602 kubelet[1308]: E0827 22:36:45.546518    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798205546100006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:36:45 ha-158602 kubelet[1308]: E0827 22:36:45.546556    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798205546100006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:36:55 ha-158602 kubelet[1308]: E0827 22:36:55.548919    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798215547850144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:36:55 ha-158602 kubelet[1308]: E0827 22:36:55.548965    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798215547850144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:05 ha-158602 kubelet[1308]: E0827 22:37:05.551726    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798225551359659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:05 ha-158602 kubelet[1308]: E0827 22:37:05.552124    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798225551359659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:15 ha-158602 kubelet[1308]: E0827 22:37:15.553696    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798235553379093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:15 ha-158602 kubelet[1308]: E0827 22:37:15.553761    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798235553379093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:25 ha-158602 kubelet[1308]: E0827 22:37:25.558729    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798245555312535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:25 ha-158602 kubelet[1308]: E0827 22:37:25.558777    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798245555312535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:35 ha-158602 kubelet[1308]: E0827 22:37:35.561255    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798255560886183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:35 ha-158602 kubelet[1308]: E0827 22:37:35.561316    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798255560886183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:45 ha-158602 kubelet[1308]: E0827 22:37:45.370714    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:37:45 ha-158602 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:37:45 ha-158602 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:37:45 ha-158602 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:37:45 ha-158602 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:37:45 ha-158602 kubelet[1308]: E0827 22:37:45.565233    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798265564484645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:45 ha-158602 kubelet[1308]: E0827 22:37:45.565283    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798265564484645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:55 ha-158602 kubelet[1308]: E0827 22:37:55.567019    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798275566670251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:37:55 ha-158602 kubelet[1308]: E0827 22:37:55.567369    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798275566670251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:38:05 ha-158602 kubelet[1308]: E0827 22:38:05.571414    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798285568937319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:38:05 ha-158602 kubelet[1308]: E0827 22:38:05.571621    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798285568937319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:38:15 ha-158602 kubelet[1308]: E0827 22:38:15.573596    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798295573258409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:38:15 ha-158602 kubelet[1308]: E0827 22:38:15.573637    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724798295573258409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 22:38:17.402180   37922 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-158602 -n ha-158602
helpers_test.go:261: (dbg) Run:  kubectl --context ha-158602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.92s)
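
Aside on the error reported at logs.go:258 in the stderr block above ("bufio.Scanner: token too long"): that is Go's bufio.ErrTooLong, returned when a single line exceeds the Scanner's buffer, which defaults to bufio.MaxScanTokenSize (64 KiB); a line in lastStart.txt was longer than that. The following is a minimal, hypothetical sketch of reading such a file with an enlarged buffer; the path is a placeholder and this is not minikube's actual logs.go code.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Placeholder path; the CI run reads .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// With the default buffer, any line over bufio.MaxScanTokenSize (64 KiB) makes
	// Scan() stop and Err() return bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Raising the cap (here to 10 MiB) avoids that failure mode for very long log lines.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process one log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}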

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (332.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465478
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-465478
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-465478: exit status 82 (2m1.779425442s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-465478-m03"  ...
	* Stopping node "multinode-465478-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-465478" : exit status 82
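For context on how "exit status 82" reaches the assertion above: a minimal, hypothetical Go sketch of running the same command and reading its exit code via os/exec. The binary path and profile name are copied from the report; this is not the real helpers_test.go/multinode_test.go implementation. Per the stderr box above, exit status 82 here corresponds to the GUEST_STOP_TIMEOUT failure path.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical stand-in for the test helper that stops the cluster.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-465478")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("minikube stop succeeded")
	case errors.As(err, &exitErr):
		// In the failed run above, this prints 82 (GUEST_STOP_TIMEOUT).
		fmt.Println("minikube stop exited with code", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}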
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465478 --wait=true -v=8 --alsologtostderr
E0827 22:56:21.248423   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465478 --wait=true -v=8 --alsologtostderr: (3m28.896036934s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465478
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-465478 -n multinode-465478
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-465478 logs -n 25: (1.396449642s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478:/home/docker/cp-test_multinode-465478-m02_multinode-465478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478 sudo cat                                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m02_multinode-465478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03:/home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478-m03 sudo cat                                   | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp testdata/cp-test.txt                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478:/home/docker/cp-test_multinode-465478-m03_multinode-465478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478 sudo cat                                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02:/home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478-m02 sudo cat                                   | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-465478 node stop m03                                                          | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	| node    | multinode-465478 node start                                                             | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:53 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| stop    | -p multinode-465478                                                                     | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| start   | -p multinode-465478                                                                     | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:55 UTC | 27 Aug 24 22:58 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:55:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:55:13.022109   47307 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:55:13.022223   47307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:55:13.022233   47307 out.go:358] Setting ErrFile to fd 2...
	I0827 22:55:13.022239   47307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:55:13.022420   47307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:55:13.022954   47307 out.go:352] Setting JSON to false
	I0827 22:55:13.023914   47307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5860,"bootTime":1724793453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:55:13.023971   47307 start.go:139] virtualization: kvm guest
	I0827 22:55:13.026108   47307 out.go:177] * [multinode-465478] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:55:13.027395   47307 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:55:13.027400   47307 notify.go:220] Checking for updates...
	I0827 22:55:13.030743   47307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:55:13.031982   47307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:55:13.033384   47307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:55:13.034902   47307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:55:13.036175   47307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:55:13.037864   47307 config.go:182] Loaded profile config "multinode-465478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:55:13.037963   47307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:55:13.038383   47307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:55:13.038430   47307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:55:13.062494   47307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0827 22:55:13.062937   47307 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:55:13.063569   47307 main.go:141] libmachine: Using API Version  1
	I0827 22:55:13.063598   47307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:55:13.063916   47307 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:55:13.064096   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.099184   47307 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 22:55:13.100530   47307 start.go:297] selected driver: kvm2
	I0827 22:55:13.100545   47307 start.go:901] validating driver "kvm2" against &{Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:55:13.100668   47307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:55:13.100962   47307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:55:13.101039   47307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:55:13.116211   47307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:55:13.116972   47307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:55:13.117009   47307 cni.go:84] Creating CNI manager for ""
	I0827 22:55:13.117015   47307 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0827 22:55:13.117081   47307 start.go:340] cluster config:
	{Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:55:13.117193   47307 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:55:13.119824   47307 out.go:177] * Starting "multinode-465478" primary control-plane node in "multinode-465478" cluster
	I0827 22:55:13.121319   47307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:55:13.121353   47307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:55:13.121363   47307 cache.go:56] Caching tarball of preloaded images
	I0827 22:55:13.121432   47307 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:55:13.121442   47307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:55:13.121560   47307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/config.json ...
	I0827 22:55:13.121777   47307 start.go:360] acquireMachinesLock for multinode-465478: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:55:13.121813   47307 start.go:364] duration metric: took 20.608µs to acquireMachinesLock for "multinode-465478"
	I0827 22:55:13.121828   47307 start.go:96] Skipping create...Using existing machine configuration
	I0827 22:55:13.121833   47307 fix.go:54] fixHost starting: 
	I0827 22:55:13.122077   47307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:55:13.122107   47307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:55:13.136366   47307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0827 22:55:13.136922   47307 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:55:13.137384   47307 main.go:141] libmachine: Using API Version  1
	I0827 22:55:13.137398   47307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:55:13.137793   47307 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:55:13.138028   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.138184   47307 main.go:141] libmachine: (multinode-465478) Calling .GetState
	I0827 22:55:13.139845   47307 fix.go:112] recreateIfNeeded on multinode-465478: state=Running err=<nil>
	W0827 22:55:13.139866   47307 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 22:55:13.141885   47307 out.go:177] * Updating the running kvm2 "multinode-465478" VM ...
	I0827 22:55:13.143234   47307 machine.go:93] provisionDockerMachine start ...
	I0827 22:55:13.143252   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.143454   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.145771   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.146208   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.146237   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.146352   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.146522   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.146642   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.146859   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.147025   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.147231   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.147245   47307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 22:55:13.257982   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-465478
	
	I0827 22:55:13.258036   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.258270   47307 buildroot.go:166] provisioning hostname "multinode-465478"
	I0827 22:55:13.258294   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.258504   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.261289   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.261671   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.261694   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.261851   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.262031   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.262196   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.262294   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.262421   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.262600   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.262616   47307 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-465478 && echo "multinode-465478" | sudo tee /etc/hostname
	I0827 22:55:13.383433   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-465478
	
	I0827 22:55:13.383469   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.386293   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.386701   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.386725   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.386955   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.387156   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.387342   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.387524   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.387701   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.387942   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.387966   47307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-465478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-465478/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-465478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:55:13.497064   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:55:13.497088   47307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:55:13.497114   47307 buildroot.go:174] setting up certificates
	I0827 22:55:13.497123   47307 provision.go:84] configureAuth start
	I0827 22:55:13.497131   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.497417   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:55:13.499930   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.500265   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.500288   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.500500   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.502526   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.502867   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.502899   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.503013   47307 provision.go:143] copyHostCerts
	I0827 22:55:13.503039   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:55:13.503071   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:55:13.503086   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:55:13.503154   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:55:13.503245   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:55:13.503269   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:55:13.503273   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:55:13.503297   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:55:13.503351   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:55:13.503366   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:55:13.503373   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:55:13.503392   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:55:13.503442   47307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.multinode-465478 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-465478]
	I0827 22:55:13.572714   47307 provision.go:177] copyRemoteCerts
	I0827 22:55:13.572780   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:55:13.572803   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.575491   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.575848   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.575875   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.576065   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.576231   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.576366   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.576549   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:55:13.658322   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:55:13.658392   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:55:13.682056   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:55:13.682125   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0827 22:55:13.705603   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:55:13.705685   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:55:13.728103   47307 provision.go:87] duration metric: took 230.968024ms to configureAuth
	I0827 22:55:13.728134   47307 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:55:13.728348   47307 config.go:182] Loaded profile config "multinode-465478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:55:13.728437   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.730767   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.731119   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.731149   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.731279   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.731440   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.731634   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.731774   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.731977   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.732130   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.732145   47307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:56:44.362478   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:56:44.362507   47307 machine.go:96] duration metric: took 1m31.219260749s to provisionDockerMachine
	I0827 22:56:44.362520   47307 start.go:293] postStartSetup for "multinode-465478" (driver="kvm2")
	I0827 22:56:44.362530   47307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:56:44.362545   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.362876   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:56:44.362898   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.366284   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.366734   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.366754   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.366953   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.367126   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.367271   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.367387   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.451066   47307 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:56:44.454909   47307 command_runner.go:130] > NAME=Buildroot
	I0827 22:56:44.454932   47307 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0827 22:56:44.454937   47307 command_runner.go:130] > ID=buildroot
	I0827 22:56:44.454942   47307 command_runner.go:130] > VERSION_ID=2023.02.9
	I0827 22:56:44.454946   47307 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0827 22:56:44.455033   47307 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:56:44.455053   47307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:56:44.455119   47307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:56:44.455199   47307 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:56:44.455209   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:56:44.455298   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:56:44.463978   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:56:44.490185   47307 start.go:296] duration metric: took 127.652878ms for postStartSetup
	I0827 22:56:44.490230   47307 fix.go:56] duration metric: took 1m31.368396123s for fixHost
	I0827 22:56:44.490251   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.493107   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.493583   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.493616   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.493762   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.493997   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.494132   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.494260   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.494408   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:56:44.494587   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:56:44.494601   47307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:56:44.604942   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724799404.577277528
	
	I0827 22:56:44.604967   47307 fix.go:216] guest clock: 1724799404.577277528
	I0827 22:56:44.604976   47307 fix.go:229] Guest: 2024-08-27 22:56:44.577277528 +0000 UTC Remote: 2024-08-27 22:56:44.490235835 +0000 UTC m=+91.505291952 (delta=87.041693ms)
	I0827 22:56:44.605000   47307 fix.go:200] guest clock delta is within tolerance: 87.041693ms
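For reference, the delta above is just the difference of the two clock readings: 1724799404.577277528 (guest) - 1724799404.490235835 (remote) = 0.087041693 s, i.e. the 87.041693ms reported, small enough to be treated as within tolerance.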
	I0827 22:56:44.605006   47307 start.go:83] releasing machines lock for "multinode-465478", held for 1m31.48318308s
	I0827 22:56:44.605025   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.605267   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:56:44.607648   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.607984   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.608012   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.608167   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608656   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608798   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608942   47307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:56:44.608992   47307 ssh_runner.go:195] Run: cat /version.json
	I0827 22:56:44.608993   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.609006   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.611539   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611555   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611879   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.611897   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611919   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.611936   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.612074   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.612181   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.612244   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.612330   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.612372   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.612430   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.612495   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.612546   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.715960   47307 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0827 22:56:44.716727   47307 command_runner.go:130] > {"iso_version": "v1.33.1-1724692311-19511", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "ab8c74129ca11fc20d41e21bf0a04c3a21513cf7"}
	I0827 22:56:44.716877   47307 ssh_runner.go:195] Run: systemctl --version
	I0827 22:56:44.722549   47307 command_runner.go:130] > systemd 252 (252)
	I0827 22:56:44.722582   47307 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0827 22:56:44.722642   47307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:56:44.877748   47307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 22:56:44.883265   47307 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0827 22:56:44.883308   47307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:56:44.883365   47307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:56:44.891868   47307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 22:56:44.891888   47307 start.go:495] detecting cgroup driver to use...
	I0827 22:56:44.891945   47307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:56:44.907441   47307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:56:44.921695   47307 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:56:44.921772   47307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:56:44.936211   47307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:56:44.949085   47307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:56:45.085594   47307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:56:45.265455   47307 docker.go:233] disabling docker service ...
	I0827 22:56:45.265512   47307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:56:45.309840   47307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:56:45.324708   47307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:56:45.515571   47307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:56:45.681509   47307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
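Immediately before reconfiguring CRI-O, the start path makes sure no Docker-based CRI is left running. Collected from the ssh_runner lines above (a sketch of the same sequence, not a new procedure), the node-side commands are:

	sudo systemctl stop -f cri-docker.socket
	sudo systemctl stop -f cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket
	sudo systemctl stop -f docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service

The is-active probe on the last log line above checks that the docker service is indeed down before CRI-O is reconfigured.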
	I0827 22:56:45.695587   47307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:56:45.713404   47307 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0827 22:56:45.713440   47307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:56:45.713497   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.723378   47307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:56:45.723443   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.734027   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.745925   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.757035   47307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:56:45.768068   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.778920   47307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.789117   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.799783   47307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:56:45.809670   47307 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0827 22:56:45.809726   47307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:56:45.819664   47307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:56:45.964148   47307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:56:55.749546   47307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.785362269s)
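The ~9.8s restart above is the tail of the CRI-O reconfiguration. Condensed from the ssh_runner lines between 22:56:45.695587 and 22:56:45.964148, the key node-side steps are approximately as follows (a sketch; the minor sed and cleanup steps are omitted):

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver in the CRI-O drop-in
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged binds to low ports inside pods
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# apply
	sudo systemctl daemon-reload
	sudo systemctl restart crio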
	I0827 22:56:55.749583   47307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:56:55.749630   47307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:56:55.754100   47307 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0827 22:56:55.754115   47307 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0827 22:56:55.754121   47307 command_runner.go:130] > Device: 0,22	Inode: 1402        Links: 1
	I0827 22:56:55.754128   47307 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0827 22:56:55.754133   47307 command_runner.go:130] > Access: 2024-08-27 22:56:55.609318101 +0000
	I0827 22:56:55.754139   47307 command_runner.go:130] > Modify: 2024-08-27 22:56:55.585316377 +0000
	I0827 22:56:55.754144   47307 command_runner.go:130] > Change: 2024-08-27 22:56:55.585316377 +0000
	I0827 22:56:55.754148   47307 command_runner.go:130] >  Birth: -
	I0827 22:56:55.754225   47307 start.go:563] Will wait 60s for crictl version
	I0827 22:56:55.754273   47307 ssh_runner.go:195] Run: which crictl
	I0827 22:56:55.757629   47307 command_runner.go:130] > /usr/bin/crictl
	I0827 22:56:55.757681   47307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:56:55.793073   47307 command_runner.go:130] > Version:  0.1.0
	I0827 22:56:55.793113   47307 command_runner.go:130] > RuntimeName:  cri-o
	I0827 22:56:55.793118   47307 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0827 22:56:55.793123   47307 command_runner.go:130] > RuntimeApiVersion:  v1
	I0827 22:56:55.794117   47307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 22:56:55.794196   47307 ssh_runner.go:195] Run: crio --version
	I0827 22:56:55.819761   47307 command_runner.go:130] > crio version 1.29.1
	I0827 22:56:55.819783   47307 command_runner.go:130] > Version:        1.29.1
	I0827 22:56:55.819789   47307 command_runner.go:130] > GitCommit:      unknown
	I0827 22:56:55.819793   47307 command_runner.go:130] > GitCommitDate:  unknown
	I0827 22:56:55.819797   47307 command_runner.go:130] > GitTreeState:   clean
	I0827 22:56:55.819802   47307 command_runner.go:130] > BuildDate:      2024-08-26T22:48:20Z
	I0827 22:56:55.819807   47307 command_runner.go:130] > GoVersion:      go1.21.6
	I0827 22:56:55.819811   47307 command_runner.go:130] > Compiler:       gc
	I0827 22:56:55.819815   47307 command_runner.go:130] > Platform:       linux/amd64
	I0827 22:56:55.819819   47307 command_runner.go:130] > Linkmode:       dynamic
	I0827 22:56:55.819823   47307 command_runner.go:130] > BuildTags:      
	I0827 22:56:55.819827   47307 command_runner.go:130] >   containers_image_ostree_stub
	I0827 22:56:55.819831   47307 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0827 22:56:55.819834   47307 command_runner.go:130] >   btrfs_noversion
	I0827 22:56:55.819840   47307 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0827 22:56:55.819844   47307 command_runner.go:130] >   libdm_no_deferred_remove
	I0827 22:56:55.819847   47307 command_runner.go:130] >   seccomp
	I0827 22:56:55.819851   47307 command_runner.go:130] > LDFlags:          unknown
	I0827 22:56:55.819855   47307 command_runner.go:130] > SeccompEnabled:   true
	I0827 22:56:55.819859   47307 command_runner.go:130] > AppArmorEnabled:  false
	I0827 22:56:55.820950   47307 ssh_runner.go:195] Run: crio --version
	I0827 22:56:55.847712   47307 command_runner.go:130] > crio version 1.29.1
	I0827 22:56:55.847736   47307 command_runner.go:130] > Version:        1.29.1
	I0827 22:56:55.847741   47307 command_runner.go:130] > GitCommit:      unknown
	I0827 22:56:55.847745   47307 command_runner.go:130] > GitCommitDate:  unknown
	I0827 22:56:55.847749   47307 command_runner.go:130] > GitTreeState:   clean
	I0827 22:56:55.847754   47307 command_runner.go:130] > BuildDate:      2024-08-26T22:48:20Z
	I0827 22:56:55.847767   47307 command_runner.go:130] > GoVersion:      go1.21.6
	I0827 22:56:55.847771   47307 command_runner.go:130] > Compiler:       gc
	I0827 22:56:55.847775   47307 command_runner.go:130] > Platform:       linux/amd64
	I0827 22:56:55.847778   47307 command_runner.go:130] > Linkmode:       dynamic
	I0827 22:56:55.847783   47307 command_runner.go:130] > BuildTags:      
	I0827 22:56:55.847787   47307 command_runner.go:130] >   containers_image_ostree_stub
	I0827 22:56:55.847792   47307 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0827 22:56:55.847795   47307 command_runner.go:130] >   btrfs_noversion
	I0827 22:56:55.847800   47307 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0827 22:56:55.847804   47307 command_runner.go:130] >   libdm_no_deferred_remove
	I0827 22:56:55.847807   47307 command_runner.go:130] >   seccomp
	I0827 22:56:55.847811   47307 command_runner.go:130] > LDFlags:          unknown
	I0827 22:56:55.847815   47307 command_runner.go:130] > SeccompEnabled:   true
	I0827 22:56:55.847819   47307 command_runner.go:130] > AppArmorEnabled:  false
	I0827 22:56:55.850808   47307 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:56:55.852206   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:56:55.855086   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:55.855439   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:55.855457   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:55.855712   47307 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:56:55.859528   47307 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0827 22:56:55.859640   47307 kubeadm.go:883] updating cluster {Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:56:55.859820   47307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:56:55.859875   47307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:56:55.899917   47307 command_runner.go:130] > {
	I0827 22:56:55.899942   47307 command_runner.go:130] >   "images": [
	I0827 22:56:55.899948   47307 command_runner.go:130] >     {
	I0827 22:56:55.899968   47307 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0827 22:56:55.899975   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900006   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0827 22:56:55.900019   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900026   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900059   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0827 22:56:55.900071   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0827 22:56:55.900080   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900086   47307 command_runner.go:130] >       "size": "87165492",
	I0827 22:56:55.900092   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900102   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900108   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900114   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900120   47307 command_runner.go:130] >     },
	I0827 22:56:55.900128   47307 command_runner.go:130] >     {
	I0827 22:56:55.900138   47307 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0827 22:56:55.900147   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900156   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0827 22:56:55.900162   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900169   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900183   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0827 22:56:55.900197   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0827 22:56:55.900205   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900212   47307 command_runner.go:130] >       "size": "87190579",
	I0827 22:56:55.900220   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900235   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900243   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900250   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900257   47307 command_runner.go:130] >     },
	I0827 22:56:55.900263   47307 command_runner.go:130] >     {
	I0827 22:56:55.900273   47307 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0827 22:56:55.900282   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900289   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0827 22:56:55.900298   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900305   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900319   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0827 22:56:55.900335   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0827 22:56:55.900341   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900346   47307 command_runner.go:130] >       "size": "1363676",
	I0827 22:56:55.900352   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900356   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900360   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900366   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900369   47307 command_runner.go:130] >     },
	I0827 22:56:55.900372   47307 command_runner.go:130] >     {
	I0827 22:56:55.900379   47307 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0827 22:56:55.900384   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900391   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0827 22:56:55.900394   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900400   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900413   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0827 22:56:55.900432   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0827 22:56:55.900438   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900442   47307 command_runner.go:130] >       "size": "31470524",
	I0827 22:56:55.900446   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900452   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900455   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900460   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900481   47307 command_runner.go:130] >     },
	I0827 22:56:55.900489   47307 command_runner.go:130] >     {
	I0827 22:56:55.900498   47307 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0827 22:56:55.900508   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900516   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0827 22:56:55.900525   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900532   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900543   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0827 22:56:55.900552   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0827 22:56:55.900558   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900566   47307 command_runner.go:130] >       "size": "61245718",
	I0827 22:56:55.900572   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900576   47307 command_runner.go:130] >       "username": "nonroot",
	I0827 22:56:55.900580   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900590   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900595   47307 command_runner.go:130] >     },
	I0827 22:56:55.900599   47307 command_runner.go:130] >     {
	I0827 22:56:55.900604   47307 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0827 22:56:55.900608   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900615   47307 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0827 22:56:55.900619   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900625   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900632   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0827 22:56:55.900641   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0827 22:56:55.900653   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900659   47307 command_runner.go:130] >       "size": "149009664",
	I0827 22:56:55.900663   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900667   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900673   47307 command_runner.go:130] >       },
	I0827 22:56:55.900680   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900689   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900695   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900700   47307 command_runner.go:130] >     },
	I0827 22:56:55.900709   47307 command_runner.go:130] >     {
	I0827 22:56:55.900717   47307 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0827 22:56:55.900727   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900736   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0827 22:56:55.900744   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900751   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900765   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0827 22:56:55.900780   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0827 22:56:55.900788   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900793   47307 command_runner.go:130] >       "size": "95233506",
	I0827 22:56:55.900797   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900801   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900807   47307 command_runner.go:130] >       },
	I0827 22:56:55.900811   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900815   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900820   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900823   47307 command_runner.go:130] >     },
	I0827 22:56:55.900832   47307 command_runner.go:130] >     {
	I0827 22:56:55.900841   47307 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0827 22:56:55.900845   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900853   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0827 22:56:55.900856   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900860   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900881   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0827 22:56:55.900892   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0827 22:56:55.900895   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900900   47307 command_runner.go:130] >       "size": "89437512",
	I0827 22:56:55.900903   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900907   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900913   47307 command_runner.go:130] >       },
	I0827 22:56:55.900916   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900920   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900924   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900927   47307 command_runner.go:130] >     },
	I0827 22:56:55.900930   47307 command_runner.go:130] >     {
	I0827 22:56:55.900936   47307 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0827 22:56:55.900939   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900944   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0827 22:56:55.900947   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900950   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900959   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0827 22:56:55.900966   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0827 22:56:55.900969   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900973   47307 command_runner.go:130] >       "size": "92728217",
	I0827 22:56:55.900977   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900980   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900984   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900987   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900990   47307 command_runner.go:130] >     },
	I0827 22:56:55.900993   47307 command_runner.go:130] >     {
	I0827 22:56:55.900999   47307 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0827 22:56:55.901002   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.901006   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0827 22:56:55.901014   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901018   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.901025   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0827 22:56:55.901032   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0827 22:56:55.901035   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901039   47307 command_runner.go:130] >       "size": "68420936",
	I0827 22:56:55.901042   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.901046   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.901052   47307 command_runner.go:130] >       },
	I0827 22:56:55.901056   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.901059   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.901065   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.901068   47307 command_runner.go:130] >     },
	I0827 22:56:55.901071   47307 command_runner.go:130] >     {
	I0827 22:56:55.901077   47307 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0827 22:56:55.901083   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.901088   47307 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0827 22:56:55.901091   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901094   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.901101   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0827 22:56:55.901110   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0827 22:56:55.901113   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901117   47307 command_runner.go:130] >       "size": "742080",
	I0827 22:56:55.901121   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.901124   47307 command_runner.go:130] >         "value": "65535"
	I0827 22:56:55.901128   47307 command_runner.go:130] >       },
	I0827 22:56:55.901132   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.901135   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.901139   47307 command_runner.go:130] >       "pinned": true
	I0827 22:56:55.901142   47307 command_runner.go:130] >     }
	I0827 22:56:55.901146   47307 command_runner.go:130] >   ]
	I0827 22:56:55.901151   47307 command_runner.go:130] > }
	I0827 22:56:55.901330   47307 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:56:55.901341   47307 crio.go:433] Images already preloaded, skipping extraction
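Note: the preload check above can be reproduced by hand. The sketch below is illustrative only (it is not minikube's crio.go implementation, and the file name check_preload.go is arbitrary): it reads the same `crictl images --output json` payload from stdin and reports whether the v1.31.0 control-plane images listed in this log are present. The required-image list is copied from the listing above.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList matches the shape of `crictl images --output json`
	// as shown in the log above.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Control-plane images expected for Kubernetes v1.31.0 on CRI-O,
		// copied from the listing above (illustrative, not exhaustive).
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0",
			"registry.k8s.io/kube-scheduler:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.10",
		}

		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}

		// Index every tag the runtime reports, then diff against the list.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		missing := 0
		for _, want := range required {
			if !have[want] {
				fmt.Println("missing:", want)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all required images are present")
		}
	}

Usage on the node would be along the lines of: sudo crictl images --output json | go run check_preload.go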
	I0827 22:56:55.901388   47307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:56:55.935460   47307 command_runner.go:130] > {
	I0827 22:56:55.935478   47307 command_runner.go:130] >   "images": [
	I0827 22:56:55.935483   47307 command_runner.go:130] >     {
	I0827 22:56:55.935490   47307 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0827 22:56:55.935495   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935501   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0827 22:56:55.935505   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935508   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935517   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0827 22:56:55.935523   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0827 22:56:55.935527   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935531   47307 command_runner.go:130] >       "size": "87165492",
	I0827 22:56:55.935535   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935539   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935546   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935553   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935556   47307 command_runner.go:130] >     },
	I0827 22:56:55.935560   47307 command_runner.go:130] >     {
	I0827 22:56:55.935566   47307 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0827 22:56:55.935572   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935577   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0827 22:56:55.935583   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935587   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935596   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0827 22:56:55.935603   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0827 22:56:55.935607   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935615   47307 command_runner.go:130] >       "size": "87190579",
	I0827 22:56:55.935619   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935626   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935632   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935636   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935639   47307 command_runner.go:130] >     },
	I0827 22:56:55.935642   47307 command_runner.go:130] >     {
	I0827 22:56:55.935648   47307 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0827 22:56:55.935655   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935660   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0827 22:56:55.935671   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935675   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935682   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0827 22:56:55.935689   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0827 22:56:55.935695   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935700   47307 command_runner.go:130] >       "size": "1363676",
	I0827 22:56:55.935706   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935710   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935716   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935723   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935727   47307 command_runner.go:130] >     },
	I0827 22:56:55.935735   47307 command_runner.go:130] >     {
	I0827 22:56:55.935744   47307 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0827 22:56:55.935748   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935753   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0827 22:56:55.935759   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935763   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935770   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0827 22:56:55.935785   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0827 22:56:55.935791   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935795   47307 command_runner.go:130] >       "size": "31470524",
	I0827 22:56:55.935801   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935805   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935809   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935813   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935817   47307 command_runner.go:130] >     },
	I0827 22:56:55.935820   47307 command_runner.go:130] >     {
	I0827 22:56:55.935826   47307 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0827 22:56:55.935832   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935837   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0827 22:56:55.935841   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935845   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935851   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0827 22:56:55.935860   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0827 22:56:55.935864   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935868   47307 command_runner.go:130] >       "size": "61245718",
	I0827 22:56:55.935880   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935886   47307 command_runner.go:130] >       "username": "nonroot",
	I0827 22:56:55.935890   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935893   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935897   47307 command_runner.go:130] >     },
	I0827 22:56:55.935900   47307 command_runner.go:130] >     {
	I0827 22:56:55.935907   47307 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0827 22:56:55.935913   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935918   47307 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0827 22:56:55.935923   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935927   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935936   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0827 22:56:55.935943   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0827 22:56:55.935948   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935952   47307 command_runner.go:130] >       "size": "149009664",
	I0827 22:56:55.935955   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.935959   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.935965   47307 command_runner.go:130] >       },
	I0827 22:56:55.935971   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935975   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935979   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935982   47307 command_runner.go:130] >     },
	I0827 22:56:55.935986   47307 command_runner.go:130] >     {
	I0827 22:56:55.935991   47307 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0827 22:56:55.935998   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936002   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0827 22:56:55.936008   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936012   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936019   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0827 22:56:55.936028   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0827 22:56:55.936032   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936036   47307 command_runner.go:130] >       "size": "95233506",
	I0827 22:56:55.936039   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936043   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936047   47307 command_runner.go:130] >       },
	I0827 22:56:55.936050   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936058   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936065   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936068   47307 command_runner.go:130] >     },
	I0827 22:56:55.936072   47307 command_runner.go:130] >     {
	I0827 22:56:55.936077   47307 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0827 22:56:55.936081   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936087   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0827 22:56:55.936092   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936096   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936120   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0827 22:56:55.936129   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0827 22:56:55.936133   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936137   47307 command_runner.go:130] >       "size": "89437512",
	I0827 22:56:55.936143   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936147   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936150   47307 command_runner.go:130] >       },
	I0827 22:56:55.936154   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936158   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936161   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936165   47307 command_runner.go:130] >     },
	I0827 22:56:55.936168   47307 command_runner.go:130] >     {
	I0827 22:56:55.936174   47307 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0827 22:56:55.936180   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936185   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0827 22:56:55.936188   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936192   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936199   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0827 22:56:55.936210   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0827 22:56:55.936215   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936219   47307 command_runner.go:130] >       "size": "92728217",
	I0827 22:56:55.936223   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.936226   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936232   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936235   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936239   47307 command_runner.go:130] >     },
	I0827 22:56:55.936242   47307 command_runner.go:130] >     {
	I0827 22:56:55.936253   47307 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0827 22:56:55.936259   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936263   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0827 22:56:55.936267   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936271   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936278   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0827 22:56:55.936286   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0827 22:56:55.936290   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936294   47307 command_runner.go:130] >       "size": "68420936",
	I0827 22:56:55.936297   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936302   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936305   47307 command_runner.go:130] >       },
	I0827 22:56:55.936311   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936315   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936321   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936324   47307 command_runner.go:130] >     },
	I0827 22:56:55.936328   47307 command_runner.go:130] >     {
	I0827 22:56:55.936334   47307 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0827 22:56:55.936340   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936344   47307 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0827 22:56:55.936347   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936351   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936364   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0827 22:56:55.936372   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0827 22:56:55.936376   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936380   47307 command_runner.go:130] >       "size": "742080",
	I0827 22:56:55.936383   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936387   47307 command_runner.go:130] >         "value": "65535"
	I0827 22:56:55.936393   47307 command_runner.go:130] >       },
	I0827 22:56:55.936396   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936403   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936406   47307 command_runner.go:130] >       "pinned": true
	I0827 22:56:55.936410   47307 command_runner.go:130] >     }
	I0827 22:56:55.936414   47307 command_runner.go:130] >   ]
	I0827 22:56:55.936417   47307 command_runner.go:130] > }
	I0827 22:56:55.936549   47307 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:56:55.936561   47307 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:56:55.936568   47307 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.0 crio true true} ...
	I0827 22:56:55.936661   47307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-465478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
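The kubelet unit logged by kubeadm.go:946 is rendered from the node config echoed just above it. As a rough illustration only (a minimal sketch assuming a plain text/template; the struct fields and template text are simplified stand-ins, not minikube's actual template), the same drop-in can be produced like this:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletTmplData holds an illustrative subset of the node config
	// echoed in the log (Kubernetes version, hostname, node IP).
	type kubeletTmplData struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// Values taken from this log entry.
		data := kubeletTmplData{
			KubernetesVersion: "v1.31.0",
			NodeName:          "multinode-465478",
			NodeIP:            "192.168.39.203",
		}
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}

Rendering with the values from this run (v1.31.0, multinode-465478, 192.168.39.203) reproduces the ExecStart line shown above.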
	I0827 22:56:55.936726   47307 ssh_runner.go:195] Run: crio config
	I0827 22:56:55.968536   47307 command_runner.go:130] ! time="2024-08-27 22:56:55.940728265Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0827 22:56:55.974701   47307 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0827 22:56:55.985062   47307 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0827 22:56:55.985082   47307 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0827 22:56:55.985089   47307 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0827 22:56:55.985092   47307 command_runner.go:130] > #
	I0827 22:56:55.985098   47307 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0827 22:56:55.985104   47307 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0827 22:56:55.985109   47307 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0827 22:56:55.985115   47307 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0827 22:56:55.985119   47307 command_runner.go:130] > # reload'.
	I0827 22:56:55.985125   47307 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0827 22:56:55.985133   47307 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0827 22:56:55.985138   47307 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0827 22:56:55.985146   47307 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0827 22:56:55.985149   47307 command_runner.go:130] > [crio]
	I0827 22:56:55.985157   47307 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0827 22:56:55.985162   47307 command_runner.go:130] > # containers images, in this directory.
	I0827 22:56:55.985172   47307 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0827 22:56:55.985184   47307 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0827 22:56:55.985194   47307 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0827 22:56:55.985206   47307 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0827 22:56:55.985213   47307 command_runner.go:130] > # imagestore = ""
	I0827 22:56:55.985219   47307 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0827 22:56:55.985227   47307 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0827 22:56:55.985233   47307 command_runner.go:130] > storage_driver = "overlay"
	I0827 22:56:55.985239   47307 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0827 22:56:55.985246   47307 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0827 22:56:55.985253   47307 command_runner.go:130] > storage_option = [
	I0827 22:56:55.985260   47307 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0827 22:56:55.985263   47307 command_runner.go:130] > ]
	I0827 22:56:55.985273   47307 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0827 22:56:55.985281   47307 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0827 22:56:55.985286   47307 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0827 22:56:55.985294   47307 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0827 22:56:55.985299   47307 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0827 22:56:55.985306   47307 command_runner.go:130] > # always happen on a node reboot
	I0827 22:56:55.985311   47307 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0827 22:56:55.985322   47307 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0827 22:56:55.985330   47307 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0827 22:56:55.985335   47307 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0827 22:56:55.985341   47307 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0827 22:56:55.985348   47307 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0827 22:56:55.985358   47307 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0827 22:56:55.985362   47307 command_runner.go:130] > # internal_wipe = true
	I0827 22:56:55.985369   47307 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0827 22:56:55.985376   47307 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0827 22:56:55.985381   47307 command_runner.go:130] > # internal_repair = false
	I0827 22:56:55.985388   47307 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0827 22:56:55.985394   47307 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0827 22:56:55.985400   47307 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0827 22:56:55.985405   47307 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0827 22:56:55.985413   47307 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0827 22:56:55.985416   47307 command_runner.go:130] > [crio.api]
	I0827 22:56:55.985423   47307 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0827 22:56:55.985427   47307 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0827 22:56:55.985434   47307 command_runner.go:130] > # IP address on which the stream server will listen.
	I0827 22:56:55.985438   47307 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0827 22:56:55.985444   47307 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0827 22:56:55.985449   47307 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0827 22:56:55.985454   47307 command_runner.go:130] > # stream_port = "0"
	I0827 22:56:55.985459   47307 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0827 22:56:55.985463   47307 command_runner.go:130] > # stream_enable_tls = false
	I0827 22:56:55.985469   47307 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0827 22:56:55.985474   47307 command_runner.go:130] > # stream_idle_timeout = ""
	I0827 22:56:55.985484   47307 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0827 22:56:55.985492   47307 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0827 22:56:55.985499   47307 command_runner.go:130] > # minutes.
	I0827 22:56:55.985506   47307 command_runner.go:130] > # stream_tls_cert = ""
	I0827 22:56:55.985512   47307 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0827 22:56:55.985520   47307 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0827 22:56:55.985524   47307 command_runner.go:130] > # stream_tls_key = ""
	I0827 22:56:55.985529   47307 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0827 22:56:55.985537   47307 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0827 22:56:55.985556   47307 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0827 22:56:55.985563   47307 command_runner.go:130] > # stream_tls_ca = ""
	I0827 22:56:55.985570   47307 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0827 22:56:55.985576   47307 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0827 22:56:55.985583   47307 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0827 22:56:55.985589   47307 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0827 22:56:55.985595   47307 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0827 22:56:55.985601   47307 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0827 22:56:55.985605   47307 command_runner.go:130] > [crio.runtime]
	I0827 22:56:55.985616   47307 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0827 22:56:55.985624   47307 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0827 22:56:55.985627   47307 command_runner.go:130] > # "nofile=1024:2048"
	I0827 22:56:55.985633   47307 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0827 22:56:55.985639   47307 command_runner.go:130] > # default_ulimits = [
	I0827 22:56:55.985642   47307 command_runner.go:130] > # ]
	I0827 22:56:55.985647   47307 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0827 22:56:55.985653   47307 command_runner.go:130] > # no_pivot = false
	I0827 22:56:55.985658   47307 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0827 22:56:55.985666   47307 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0827 22:56:55.985671   47307 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0827 22:56:55.985676   47307 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0827 22:56:55.985683   47307 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0827 22:56:55.985689   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0827 22:56:55.985696   47307 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0827 22:56:55.985700   47307 command_runner.go:130] > # Cgroup setting for conmon
	I0827 22:56:55.985708   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0827 22:56:55.985712   47307 command_runner.go:130] > conmon_cgroup = "pod"
	I0827 22:56:55.985718   47307 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0827 22:56:55.985724   47307 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0827 22:56:55.985736   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0827 22:56:55.985742   47307 command_runner.go:130] > conmon_env = [
	I0827 22:56:55.985747   47307 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0827 22:56:55.985753   47307 command_runner.go:130] > ]
	I0827 22:56:55.985757   47307 command_runner.go:130] > # Additional environment variables to set for all the
	I0827 22:56:55.985766   47307 command_runner.go:130] > # containers. These are overridden if set in the
	I0827 22:56:55.985771   47307 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0827 22:56:55.985778   47307 command_runner.go:130] > # default_env = [
	I0827 22:56:55.985781   47307 command_runner.go:130] > # ]
	I0827 22:56:55.985786   47307 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0827 22:56:55.985795   47307 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0827 22:56:55.985800   47307 command_runner.go:130] > # selinux = false
	I0827 22:56:55.985806   47307 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0827 22:56:55.985814   47307 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0827 22:56:55.985819   47307 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0827 22:56:55.985825   47307 command_runner.go:130] > # seccomp_profile = ""
	I0827 22:56:55.985831   47307 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0827 22:56:55.985837   47307 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0827 22:56:55.985842   47307 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0827 22:56:55.985847   47307 command_runner.go:130] > # which might increase security.
	I0827 22:56:55.985851   47307 command_runner.go:130] > # This option is currently deprecated,
	I0827 22:56:55.985858   47307 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0827 22:56:55.985863   47307 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0827 22:56:55.985869   47307 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0827 22:56:55.985879   47307 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0827 22:56:55.985887   47307 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0827 22:56:55.985893   47307 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0827 22:56:55.985900   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.985904   47307 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0827 22:56:55.985912   47307 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0827 22:56:55.985916   47307 command_runner.go:130] > # the cgroup blockio controller.
	I0827 22:56:55.985922   47307 command_runner.go:130] > # blockio_config_file = ""
	I0827 22:56:55.985928   47307 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0827 22:56:55.985934   47307 command_runner.go:130] > # blockio parameters.
	I0827 22:56:55.985938   47307 command_runner.go:130] > # blockio_reload = false
	I0827 22:56:55.985944   47307 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0827 22:56:55.985953   47307 command_runner.go:130] > # irqbalance daemon.
	I0827 22:56:55.985958   47307 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0827 22:56:55.985968   47307 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0827 22:56:55.985975   47307 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0827 22:56:55.985983   47307 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0827 22:56:55.985989   47307 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0827 22:56:55.985997   47307 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0827 22:56:55.986002   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.986006   47307 command_runner.go:130] > # rdt_config_file = ""
	I0827 22:56:55.986010   47307 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0827 22:56:55.986017   47307 command_runner.go:130] > cgroup_manager = "cgroupfs"
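The cgroup_manager value is worth calling out because the kubelet's cgroup driver has to agree with it (cgroupfs in this run). A quick way to read the value off a node is sketched below; this is a naive key scan rather than a TOML parser, and the /etc/crio/crio.conf path and the crioSetting helper name are assumptions for illustration.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// crioSetting does a naive line scan of a crio.conf-style file for one
	// top-level key; good enough for a quick check, not a TOML parser.
	func crioSetting(path, key string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "#") || !strings.HasPrefix(line, key) {
				continue
			}
			if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == key {
				return strings.Trim(strings.TrimSpace(v), `"`), nil
			}
		}
		return "", fmt.Errorf("%s not set in %s", key, path)
	}

	func main() {
		mgr, err := crioSetting("/etc/crio/crio.conf", "cgroup_manager")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The kubelet's cgroupDriver must match this value ("cgroupfs" here).
		fmt.Println("cgroup_manager =", mgr)
	}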
	I0827 22:56:55.986044   47307 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0827 22:56:55.986051   47307 command_runner.go:130] > # separate_pull_cgroup = ""
	I0827 22:56:55.986057   47307 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0827 22:56:55.986062   47307 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0827 22:56:55.986065   47307 command_runner.go:130] > # will be added.
	I0827 22:56:55.986069   47307 command_runner.go:130] > # default_capabilities = [
	I0827 22:56:55.986073   47307 command_runner.go:130] > # 	"CHOWN",
	I0827 22:56:55.986076   47307 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0827 22:56:55.986080   47307 command_runner.go:130] > # 	"FSETID",
	I0827 22:56:55.986084   47307 command_runner.go:130] > # 	"FOWNER",
	I0827 22:56:55.986087   47307 command_runner.go:130] > # 	"SETGID",
	I0827 22:56:55.986090   47307 command_runner.go:130] > # 	"SETUID",
	I0827 22:56:55.986094   47307 command_runner.go:130] > # 	"SETPCAP",
	I0827 22:56:55.986097   47307 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0827 22:56:55.986101   47307 command_runner.go:130] > # 	"KILL",
	I0827 22:56:55.986104   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986111   47307 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0827 22:56:55.986119   47307 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0827 22:56:55.986124   47307 command_runner.go:130] > # add_inheritable_capabilities = false
	I0827 22:56:55.986131   47307 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0827 22:56:55.986136   47307 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0827 22:56:55.986142   47307 command_runner.go:130] > default_sysctls = [
	I0827 22:56:55.986147   47307 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0827 22:56:55.986150   47307 command_runner.go:130] > ]
	I0827 22:56:55.986154   47307 command_runner.go:130] > # List of devices on the host that a
	I0827 22:56:55.986165   47307 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0827 22:56:55.986171   47307 command_runner.go:130] > # allowed_devices = [
	I0827 22:56:55.986174   47307 command_runner.go:130] > # 	"/dev/fuse",
	I0827 22:56:55.986178   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986182   47307 command_runner.go:130] > # List of additional devices. specified as
	I0827 22:56:55.986191   47307 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0827 22:56:55.986195   47307 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0827 22:56:55.986205   47307 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0827 22:56:55.986214   47307 command_runner.go:130] > # additional_devices = [
	I0827 22:56:55.986217   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986222   47307 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0827 22:56:55.986228   47307 command_runner.go:130] > # cdi_spec_dirs = [
	I0827 22:56:55.986231   47307 command_runner.go:130] > # 	"/etc/cdi",
	I0827 22:56:55.986235   47307 command_runner.go:130] > # 	"/var/run/cdi",
	I0827 22:56:55.986241   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986246   47307 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0827 22:56:55.986254   47307 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0827 22:56:55.986258   47307 command_runner.go:130] > # Defaults to false.
	I0827 22:56:55.986262   47307 command_runner.go:130] > # device_ownership_from_security_context = false
	I0827 22:56:55.986269   47307 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0827 22:56:55.986274   47307 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0827 22:56:55.986278   47307 command_runner.go:130] > # hooks_dir = [
	I0827 22:56:55.986283   47307 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0827 22:56:55.986288   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986294   47307 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0827 22:56:55.986302   47307 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0827 22:56:55.986306   47307 command_runner.go:130] > # its default mounts from the following two files:
	I0827 22:56:55.986309   47307 command_runner.go:130] > #
	I0827 22:56:55.986315   47307 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0827 22:56:55.986323   47307 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0827 22:56:55.986328   47307 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0827 22:56:55.986333   47307 command_runner.go:130] > #
	I0827 22:56:55.986338   47307 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0827 22:56:55.986345   47307 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0827 22:56:55.986351   47307 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0827 22:56:55.986358   47307 command_runner.go:130] > #      only add mounts it finds in this file.
	I0827 22:56:55.986365   47307 command_runner.go:130] > #
	I0827 22:56:55.986371   47307 command_runner.go:130] > # default_mounts_file = ""
	I0827 22:56:55.986376   47307 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0827 22:56:55.986385   47307 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0827 22:56:55.986388   47307 command_runner.go:130] > pids_limit = 1024
	I0827 22:56:55.986394   47307 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0827 22:56:55.986400   47307 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0827 22:56:55.986406   47307 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0827 22:56:55.986416   47307 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0827 22:56:55.986420   47307 command_runner.go:130] > # log_size_max = -1
	I0827 22:56:55.986426   47307 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0827 22:56:55.986434   47307 command_runner.go:130] > # log_to_journald = false
	I0827 22:56:55.986440   47307 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0827 22:56:55.986444   47307 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0827 22:56:55.986451   47307 command_runner.go:130] > # Path to directory for container attach sockets.
	I0827 22:56:55.986456   47307 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0827 22:56:55.986463   47307 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0827 22:56:55.986466   47307 command_runner.go:130] > # bind_mount_prefix = ""
	I0827 22:56:55.986472   47307 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0827 22:56:55.986475   47307 command_runner.go:130] > # read_only = false
	I0827 22:56:55.986481   47307 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0827 22:56:55.986489   47307 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0827 22:56:55.986493   47307 command_runner.go:130] > # live configuration reload.
	I0827 22:56:55.986498   47307 command_runner.go:130] > # log_level = "info"
	I0827 22:56:55.986503   47307 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0827 22:56:55.986510   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.986513   47307 command_runner.go:130] > # log_filter = ""
	I0827 22:56:55.986521   47307 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0827 22:56:55.986527   47307 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0827 22:56:55.986532   47307 command_runner.go:130] > # separated by comma.
	I0827 22:56:55.986541   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986547   47307 command_runner.go:130] > # uid_mappings = ""
	I0827 22:56:55.986553   47307 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0827 22:56:55.986561   47307 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0827 22:56:55.986565   47307 command_runner.go:130] > # separated by comma.
	I0827 22:56:55.986573   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986582   47307 command_runner.go:130] > # gid_mappings = ""
	I0827 22:56:55.986590   47307 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0827 22:56:55.986595   47307 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0827 22:56:55.986603   47307 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0827 22:56:55.986613   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986619   47307 command_runner.go:130] > # minimum_mappable_uid = -1
	I0827 22:56:55.986624   47307 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0827 22:56:55.986632   47307 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0827 22:56:55.986638   47307 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0827 22:56:55.986646   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986654   47307 command_runner.go:130] > # minimum_mappable_gid = -1
	I0827 22:56:55.986660   47307 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0827 22:56:55.986668   47307 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0827 22:56:55.986673   47307 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0827 22:56:55.986679   47307 command_runner.go:130] > # ctr_stop_timeout = 30
	I0827 22:56:55.986684   47307 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0827 22:56:55.986693   47307 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0827 22:56:55.986697   47307 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0827 22:56:55.986704   47307 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0827 22:56:55.986708   47307 command_runner.go:130] > drop_infra_ctr = false
	I0827 22:56:55.986713   47307 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0827 22:56:55.986719   47307 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0827 22:56:55.986726   47307 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0827 22:56:55.986736   47307 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0827 22:56:55.986745   47307 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0827 22:56:55.986755   47307 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0827 22:56:55.986762   47307 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0827 22:56:55.986768   47307 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0827 22:56:55.986772   47307 command_runner.go:130] > # shared_cpuset = ""
	I0827 22:56:55.986777   47307 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0827 22:56:55.986784   47307 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0827 22:56:55.986788   47307 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0827 22:56:55.986797   47307 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0827 22:56:55.986800   47307 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0827 22:56:55.986806   47307 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0827 22:56:55.986814   47307 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0827 22:56:55.986822   47307 command_runner.go:130] > # enable_criu_support = false
	I0827 22:56:55.986829   47307 command_runner.go:130] > # Enable/disable the generation of the container,
	I0827 22:56:55.986835   47307 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0827 22:56:55.986841   47307 command_runner.go:130] > # enable_pod_events = false
	I0827 22:56:55.986846   47307 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0827 22:56:55.986859   47307 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0827 22:56:55.986863   47307 command_runner.go:130] > # default_runtime = "runc"
	I0827 22:56:55.986870   47307 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0827 22:56:55.986877   47307 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0827 22:56:55.986887   47307 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0827 22:56:55.986896   47307 command_runner.go:130] > # creation as a file is not desired either.
	I0827 22:56:55.986903   47307 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0827 22:56:55.986911   47307 command_runner.go:130] > # the hostname is being managed dynamically.
	I0827 22:56:55.986915   47307 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0827 22:56:55.986921   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986928   47307 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0827 22:56:55.986937   47307 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0827 22:56:55.986943   47307 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0827 22:56:55.986950   47307 command_runner.go:130] > # Each entry in the table should follow the format:
	I0827 22:56:55.986953   47307 command_runner.go:130] > #
	I0827 22:56:55.986959   47307 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0827 22:56:55.986964   47307 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0827 22:56:55.987007   47307 command_runner.go:130] > # runtime_type = "oci"
	I0827 22:56:55.987014   47307 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0827 22:56:55.987018   47307 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0827 22:56:55.987025   47307 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0827 22:56:55.987029   47307 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0827 22:56:55.987033   47307 command_runner.go:130] > # monitor_env = []
	I0827 22:56:55.987037   47307 command_runner.go:130] > # privileged_without_host_devices = false
	I0827 22:56:55.987041   47307 command_runner.go:130] > # allowed_annotations = []
	I0827 22:56:55.987047   47307 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0827 22:56:55.987050   47307 command_runner.go:130] > # Where:
	I0827 22:56:55.987055   47307 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0827 22:56:55.987063   47307 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0827 22:56:55.987069   47307 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0827 22:56:55.987089   47307 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0827 22:56:55.987095   47307 command_runner.go:130] > #   in $PATH.
	I0827 22:56:55.987101   47307 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0827 22:56:55.987105   47307 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0827 22:56:55.987113   47307 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0827 22:56:55.987117   47307 command_runner.go:130] > #   state.
	I0827 22:56:55.987123   47307 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0827 22:56:55.987128   47307 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0827 22:56:55.987134   47307 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0827 22:56:55.987141   47307 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0827 22:56:55.987147   47307 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0827 22:56:55.987155   47307 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0827 22:56:55.987162   47307 command_runner.go:130] > #   The currently recognized values are:
	I0827 22:56:55.987170   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0827 22:56:55.987177   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0827 22:56:55.987185   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0827 22:56:55.987190   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0827 22:56:55.987197   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0827 22:56:55.987205   47307 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0827 22:56:55.987211   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0827 22:56:55.987219   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0827 22:56:55.987225   47307 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0827 22:56:55.987233   47307 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0827 22:56:55.987237   47307 command_runner.go:130] > #   deprecated option "conmon".
	I0827 22:56:55.987245   47307 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0827 22:56:55.987250   47307 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0827 22:56:55.987260   47307 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0827 22:56:55.987264   47307 command_runner.go:130] > #   should be moved to the container's cgroup
	I0827 22:56:55.987270   47307 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0827 22:56:55.987276   47307 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0827 22:56:55.987281   47307 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0827 22:56:55.987286   47307 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0827 22:56:55.987291   47307 command_runner.go:130] > #
	I0827 22:56:55.987300   47307 command_runner.go:130] > # Using the seccomp notifier feature:
	I0827 22:56:55.987303   47307 command_runner.go:130] > #
	I0827 22:56:55.987308   47307 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0827 22:56:55.987317   47307 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0827 22:56:55.987320   47307 command_runner.go:130] > #
	I0827 22:56:55.987326   47307 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0827 22:56:55.987331   47307 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0827 22:56:55.987334   47307 command_runner.go:130] > #
	I0827 22:56:55.987340   47307 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0827 22:56:55.987343   47307 command_runner.go:130] > # feature.
	I0827 22:56:55.987345   47307 command_runner.go:130] > #
	I0827 22:56:55.987351   47307 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0827 22:56:55.987356   47307 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0827 22:56:55.987361   47307 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0827 22:56:55.987368   47307 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0827 22:56:55.987374   47307 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0827 22:56:55.987377   47307 command_runner.go:130] > #
	I0827 22:56:55.987382   47307 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0827 22:56:55.987387   47307 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0827 22:56:55.987389   47307 command_runner.go:130] > #
	I0827 22:56:55.987395   47307 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0827 22:56:55.987400   47307 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0827 22:56:55.987403   47307 command_runner.go:130] > #
	I0827 22:56:55.987409   47307 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0827 22:56:55.987414   47307 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0827 22:56:55.987417   47307 command_runner.go:130] > # limitation.
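
A minimal sketch of exercising the seccomp notifier described above: the chosen runtime handler must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, the pod carries that annotation, and its restartPolicy is Never. Pod name and image are illustrative, and a kubectl new enough to support --annotations on kubectl run is assumed:

	# hypothetical pod; the annotation and restart policy follow the comments above
	kubectl run seccomp-notifier-demo \
	  --image=busybox \
	  --restart=Never \
	  --annotations=io.kubernetes.cri-o.seccompNotifierAction=stop \
	  -- sleep 3600
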
	I0827 22:56:55.987422   47307 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0827 22:56:55.987425   47307 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0827 22:56:55.987429   47307 command_runner.go:130] > runtime_type = "oci"
	I0827 22:56:55.987433   47307 command_runner.go:130] > runtime_root = "/run/runc"
	I0827 22:56:55.987437   47307 command_runner.go:130] > runtime_config_path = ""
	I0827 22:56:55.987444   47307 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0827 22:56:55.987448   47307 command_runner.go:130] > monitor_cgroup = "pod"
	I0827 22:56:55.987452   47307 command_runner.go:130] > monitor_exec_cgroup = ""
	I0827 22:56:55.987456   47307 command_runner.go:130] > monitor_env = [
	I0827 22:56:55.987461   47307 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0827 22:56:55.987466   47307 command_runner.go:130] > ]
	I0827 22:56:55.987470   47307 command_runner.go:130] > privileged_without_host_devices = false
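
To confirm which of the runtime settings above are actually in effect on the node, one option is to dump CRI-O's configuration over minikube's SSH tunnel. A sketch, assuming the profile name from this run and that the crio binary's config subcommand is available on the guest:

	minikube ssh -p multinode-465478 "sudo crio config | grep -A 8 'crio.runtime.runtimes.runc'"
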
	I0827 22:56:55.987478   47307 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0827 22:56:55.987487   47307 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0827 22:56:55.987495   47307 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0827 22:56:55.987502   47307 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0827 22:56:55.987509   47307 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0827 22:56:55.987516   47307 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0827 22:56:55.987524   47307 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0827 22:56:55.987534   47307 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0827 22:56:55.987539   47307 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0827 22:56:55.987546   47307 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0827 22:56:55.987549   47307 command_runner.go:130] > # Example:
	I0827 22:56:55.987553   47307 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0827 22:56:55.987557   47307 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0827 22:56:55.987564   47307 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0827 22:56:55.987568   47307 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0827 22:56:55.987572   47307 command_runner.go:130] > # cpuset = 0
	I0827 22:56:55.987575   47307 command_runner.go:130] > # cpushares = "0-1"
	I0827 22:56:55.987578   47307 command_runner.go:130] > # Where:
	I0827 22:56:55.987582   47307 command_runner.go:130] > # The workload name is workload-type.
	I0827 22:56:55.987588   47307 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0827 22:56:55.987597   47307 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0827 22:56:55.987602   47307 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0827 22:56:55.987612   47307 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0827 22:56:55.987617   47307 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0827 22:56:55.987621   47307 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0827 22:56:55.987627   47307 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0827 22:56:55.987631   47307 command_runner.go:130] > # Default value is set to true
	I0827 22:56:55.987635   47307 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0827 22:56:55.987640   47307 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0827 22:56:55.987644   47307 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0827 22:56:55.987648   47307 command_runner.go:130] > # Default value is set to 'false'
	I0827 22:56:55.987652   47307 command_runner.go:130] > # disable_hostport_mapping = false
	I0827 22:56:55.987658   47307 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0827 22:56:55.987662   47307 command_runner.go:130] > #
	I0827 22:56:55.987666   47307 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0827 22:56:55.987672   47307 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0827 22:56:55.987678   47307 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0827 22:56:55.987687   47307 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0827 22:56:55.987692   47307 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0827 22:56:55.987696   47307 command_runner.go:130] > [crio.image]
	I0827 22:56:55.987701   47307 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0827 22:56:55.987705   47307 command_runner.go:130] > # default_transport = "docker://"
	I0827 22:56:55.987710   47307 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0827 22:56:55.987716   47307 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0827 22:56:55.987721   47307 command_runner.go:130] > # global_auth_file = ""
	I0827 22:56:55.987726   47307 command_runner.go:130] > # The image used to instantiate infra containers.
	I0827 22:56:55.987730   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.987735   47307 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0827 22:56:55.987741   47307 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0827 22:56:55.987746   47307 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0827 22:56:55.987753   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.987759   47307 command_runner.go:130] > # pause_image_auth_file = ""
	I0827 22:56:55.987767   47307 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0827 22:56:55.987772   47307 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0827 22:56:55.987780   47307 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0827 22:56:55.987786   47307 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0827 22:56:55.987792   47307 command_runner.go:130] > # pause_command = "/pause"
	I0827 22:56:55.987799   47307 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0827 22:56:55.987807   47307 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0827 22:56:55.987812   47307 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0827 22:56:55.987821   47307 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0827 22:56:55.987826   47307 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0827 22:56:55.987834   47307 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0827 22:56:55.987840   47307 command_runner.go:130] > # pinned_images = [
	I0827 22:56:55.987843   47307 command_runner.go:130] > # ]
	I0827 22:56:55.987849   47307 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0827 22:56:55.987857   47307 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0827 22:56:55.987862   47307 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0827 22:56:55.987870   47307 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0827 22:56:55.987875   47307 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0827 22:56:55.987880   47307 command_runner.go:130] > # signature_policy = ""
	I0827 22:56:55.987885   47307 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0827 22:56:55.987893   47307 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0827 22:56:55.987903   47307 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0827 22:56:55.987912   47307 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0827 22:56:55.987918   47307 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0827 22:56:55.987922   47307 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0827 22:56:55.987928   47307 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0827 22:56:55.987936   47307 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0827 22:56:55.987940   47307 command_runner.go:130] > # changing them here.
	I0827 22:56:55.987946   47307 command_runner.go:130] > # insecure_registries = [
	I0827 22:56:55.987949   47307 command_runner.go:130] > # ]
	I0827 22:56:55.987955   47307 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0827 22:56:55.987962   47307 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0827 22:56:55.987966   47307 command_runner.go:130] > # image_volumes = "mkdir"
	I0827 22:56:55.987973   47307 command_runner.go:130] > # Temporary directory to use for storing big files
	I0827 22:56:55.987977   47307 command_runner.go:130] > # big_files_temporary_dir = ""
	I0827 22:56:55.987988   47307 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0827 22:56:55.987994   47307 command_runner.go:130] > # CNI plugins.
	I0827 22:56:55.987997   47307 command_runner.go:130] > [crio.network]
	I0827 22:56:55.988002   47307 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0827 22:56:55.988007   47307 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0827 22:56:55.988013   47307 command_runner.go:130] > # cni_default_network = ""
	I0827 22:56:55.988018   47307 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0827 22:56:55.988024   47307 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0827 22:56:55.988029   47307 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0827 22:56:55.988035   47307 command_runner.go:130] > # plugin_dirs = [
	I0827 22:56:55.988038   47307 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0827 22:56:55.988041   47307 command_runner.go:130] > # ]
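
The CNI locations referenced above can be checked directly; a quick sketch using the default directories shown in this config:

	sudo ls -l /etc/cni/net.d/   # network configuration files; the first one found becomes the default network
	sudo ls /opt/cni/bin/        # CNI plugin binaries
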
	I0827 22:56:55.988047   47307 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0827 22:56:55.988052   47307 command_runner.go:130] > [crio.metrics]
	I0827 22:56:55.988057   47307 command_runner.go:130] > # Globally enable or disable metrics support.
	I0827 22:56:55.988063   47307 command_runner.go:130] > enable_metrics = true
	I0827 22:56:55.988067   47307 command_runner.go:130] > # Specify enabled metrics collectors.
	I0827 22:56:55.988071   47307 command_runner.go:130] > # Per default all metrics are enabled.
	I0827 22:56:55.988079   47307 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0827 22:56:55.988084   47307 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0827 22:56:55.988090   47307 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0827 22:56:55.988095   47307 command_runner.go:130] > # metrics_collectors = [
	I0827 22:56:55.988103   47307 command_runner.go:130] > # 	"operations",
	I0827 22:56:55.988109   47307 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0827 22:56:55.988113   47307 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0827 22:56:55.988120   47307 command_runner.go:130] > # 	"operations_errors",
	I0827 22:56:55.988124   47307 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0827 22:56:55.988128   47307 command_runner.go:130] > # 	"image_pulls_by_name",
	I0827 22:56:55.988132   47307 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0827 22:56:55.988139   47307 command_runner.go:130] > # 	"image_pulls_failures",
	I0827 22:56:55.988143   47307 command_runner.go:130] > # 	"image_pulls_successes",
	I0827 22:56:55.988149   47307 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0827 22:56:55.988153   47307 command_runner.go:130] > # 	"image_layer_reuse",
	I0827 22:56:55.988157   47307 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0827 22:56:55.988163   47307 command_runner.go:130] > # 	"containers_oom_total",
	I0827 22:56:55.988166   47307 command_runner.go:130] > # 	"containers_oom",
	I0827 22:56:55.988170   47307 command_runner.go:130] > # 	"processes_defunct",
	I0827 22:56:55.988174   47307 command_runner.go:130] > # 	"operations_total",
	I0827 22:56:55.988178   47307 command_runner.go:130] > # 	"operations_latency_seconds",
	I0827 22:56:55.988182   47307 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0827 22:56:55.988187   47307 command_runner.go:130] > # 	"operations_errors_total",
	I0827 22:56:55.988191   47307 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0827 22:56:55.988195   47307 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0827 22:56:55.988199   47307 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0827 22:56:55.988205   47307 command_runner.go:130] > # 	"image_pulls_success_total",
	I0827 22:56:55.988215   47307 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0827 22:56:55.988222   47307 command_runner.go:130] > # 	"containers_oom_count_total",
	I0827 22:56:55.988231   47307 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0827 22:56:55.988238   47307 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0827 22:56:55.988241   47307 command_runner.go:130] > # ]
	I0827 22:56:55.988245   47307 command_runner.go:130] > # The port on which the metrics server will listen.
	I0827 22:56:55.988251   47307 command_runner.go:130] > # metrics_port = 9090
	I0827 22:56:55.988256   47307 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0827 22:56:55.988260   47307 command_runner.go:130] > # metrics_socket = ""
	I0827 22:56:55.988265   47307 command_runner.go:130] > # The certificate for the secure metrics server.
	I0827 22:56:55.988273   47307 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0827 22:56:55.988279   47307 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0827 22:56:55.988285   47307 command_runner.go:130] > # certificate on any modification event.
	I0827 22:56:55.988294   47307 command_runner.go:130] > # metrics_cert = ""
	I0827 22:56:55.988301   47307 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0827 22:56:55.988306   47307 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0827 22:56:55.988312   47307 command_runner.go:130] > # metrics_key = ""
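
Since enable_metrics is true above and metrics_port defaults to 9090, the Prometheus endpoint can be scraped on the node. A sketch, assuming the default localhost binding and no metrics_cert/metrics_key configured:

	curl -s http://127.0.0.1:9090/metrics | grep -m 10 crio   # sample a few CRI-O metric lines
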
	I0827 22:56:55.988317   47307 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0827 22:56:55.988323   47307 command_runner.go:130] > [crio.tracing]
	I0827 22:56:55.988328   47307 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0827 22:56:55.988331   47307 command_runner.go:130] > # enable_tracing = false
	I0827 22:56:55.988339   47307 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0827 22:56:55.988343   47307 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0827 22:56:55.988352   47307 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0827 22:56:55.988358   47307 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0827 22:56:55.988362   47307 command_runner.go:130] > # CRI-O NRI configuration.
	I0827 22:56:55.988367   47307 command_runner.go:130] > [crio.nri]
	I0827 22:56:55.988371   47307 command_runner.go:130] > # Globally enable or disable NRI.
	I0827 22:56:55.988376   47307 command_runner.go:130] > # enable_nri = false
	I0827 22:56:55.988380   47307 command_runner.go:130] > # NRI socket to listen on.
	I0827 22:56:55.988386   47307 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0827 22:56:55.988390   47307 command_runner.go:130] > # NRI plugin directory to use.
	I0827 22:56:55.988397   47307 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0827 22:56:55.988401   47307 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0827 22:56:55.988407   47307 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0827 22:56:55.988412   47307 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0827 22:56:55.988417   47307 command_runner.go:130] > # nri_disable_connections = false
	I0827 22:56:55.988422   47307 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0827 22:56:55.988429   47307 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0827 22:56:55.988433   47307 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0827 22:56:55.988440   47307 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0827 22:56:55.988445   47307 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0827 22:56:55.988451   47307 command_runner.go:130] > [crio.stats]
	I0827 22:56:55.988458   47307 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0827 22:56:55.988476   47307 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0827 22:56:55.988482   47307 command_runner.go:130] > # stats_collection_period = 0
	I0827 22:56:55.988628   47307 cni.go:84] Creating CNI manager for ""
	I0827 22:56:55.988641   47307 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0827 22:56:55.988649   47307 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:56:55.988677   47307 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-465478 NodeName:multinode-465478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:56:55.988801   47307 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-465478"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 22:56:55.988858   47307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:56:55.998657   47307 command_runner.go:130] > kubeadm
	I0827 22:56:55.998678   47307 command_runner.go:130] > kubectl
	I0827 22:56:55.998682   47307 command_runner.go:130] > kubelet
	I0827 22:56:55.998705   47307 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:56:55.998770   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 22:56:56.007867   47307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0827 22:56:56.023603   47307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:56:56.038714   47307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0827 22:56:56.054288   47307 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0827 22:56:56.057680   47307 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0827 22:56:56.057751   47307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:56:56.198826   47307 ssh_runner.go:195] Run: sudo systemctl start kubelet
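
If the kubelet did not come up cleanly at this point, the units and config written in the steps just above can be inspected directly on the node; a sketch using the paths from the scp commands:

	sudo systemctl status kubelet --no-pager
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # kubelet drop-in written above
	sudo cat /var/tmp/minikube/kubeadm.yaml.new                      # rendered kubeadm config shown above
	sudo journalctl -u kubelet --no-pager | tail -n 50
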
	I0827 22:56:56.213630   47307 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478 for IP: 192.168.39.203
	I0827 22:56:56.213655   47307 certs.go:194] generating shared ca certs ...
	I0827 22:56:56.213670   47307 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:56:56.213840   47307 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:56:56.213884   47307 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:56:56.213894   47307 certs.go:256] generating profile certs ...
	I0827 22:56:56.213977   47307 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/client.key
	I0827 22:56:56.214029   47307 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key.be360bcd
	I0827 22:56:56.214066   47307 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key
	I0827 22:56:56.214076   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:56:56.214088   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:56:56.214100   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:56:56.214112   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:56:56.214128   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:56:56.214141   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:56:56.214153   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:56:56.214165   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:56:56.214214   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:56:56.214241   47307 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:56:56.214248   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:56:56.214266   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:56:56.214289   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:56:56.214315   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:56:56.214357   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:56:56.214399   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.214412   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.214424   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.215090   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:56:56.237665   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:56:56.259556   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:56:56.280825   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:56:56.302229   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0827 22:56:56.324167   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 22:56:56.345744   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:56:56.366862   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 22:56:56.388515   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:56:56.409320   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:56:56.430632   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:56:56.452026   47307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:56:56.466803   47307 ssh_runner.go:195] Run: openssl version
	I0827 22:56:56.472084   47307 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0827 22:56:56.472236   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:56:56.482210   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486231   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486257   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486292   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.491349   47307 command_runner.go:130] > 3ec20f2e
	I0827 22:56:56.491410   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:56:56.500021   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:56:56.509768   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.513640   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.513957   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.514007   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.519169   47307 command_runner.go:130] > b5213941
	I0827 22:56:56.519235   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:56:56.528679   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:56:56.539040   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543209   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543240   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543276   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.548579   47307 command_runner.go:130] > 51391683
	I0827 22:56:56.548648   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
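
The three blocks above all follow the same pattern: hash the certificate with openssl and symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL's subject-hash lookup can find it. A generic sketch of that pattern (certificate path illustrative):

	cert=/usr/share/ca-certificates/example.pem        # hypothetical certificate
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
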
	I0827 22:56:56.558263   47307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:56:56.562314   47307 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:56:56.562343   47307 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0827 22:56:56.562352   47307 command_runner.go:130] > Device: 253,1	Inode: 6291478     Links: 1
	I0827 22:56:56.562360   47307 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0827 22:56:56.562376   47307 command_runner.go:130] > Access: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562390   47307 command_runner.go:130] > Modify: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562400   47307 command_runner.go:130] > Change: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562408   47307 command_runner.go:130] >  Birth: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562458   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 22:56:56.567556   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.567662   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 22:56:56.572773   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.572907   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 22:56:56.578092   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.578229   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 22:56:56.583433   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.583485   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 22:56:56.588365   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.588586   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 22:56:56.593660   47307 command_runner.go:130] > Certificate will not expire
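
The expiry checks above rely on openssl's -checkend, which exits non-zero if the certificate would expire within the given number of seconds (86400 = 24 hours). A compact sketch that loops over the same certificates:

	for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         /var/lib/minikube/certs/etcd/server.crt \
	         /var/lib/minikube/certs/front-proxy-client.crt; do
	  openssl x509 -noout -in "$c" -checkend 86400 || echo "$c expires within 24h"
	done
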
	I0827 22:56:56.593723   47307 kubeadm.go:392] StartCluster: {Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:56:56.593829   47307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:56:56.593898   47307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:56:56.627391   47307 command_runner.go:130] > 015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908
	I0827 22:56:56.627415   47307 command_runner.go:130] > ef8842da2a1926e369837bbfae1b7e10bb02da45e379e84d93b0cbe06f7e7855
	I0827 22:56:56.627421   47307 command_runner.go:130] > d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544
	I0827 22:56:56.627427   47307 command_runner.go:130] > 7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519
	I0827 22:56:56.627432   47307 command_runner.go:130] > 827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499
	I0827 22:56:56.627437   47307 command_runner.go:130] > 2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6
	I0827 22:56:56.627445   47307 command_runner.go:130] > ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec
	I0827 22:56:56.627453   47307 command_runner.go:130] > 30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1
	I0827 22:56:56.627490   47307 command_runner.go:130] > 855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3
	I0827 22:56:56.628871   47307 cri.go:89] found id: "015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908"
	I0827 22:56:56.628892   47307 cri.go:89] found id: "ef8842da2a1926e369837bbfae1b7e10bb02da45e379e84d93b0cbe06f7e7855"
	I0827 22:56:56.628898   47307 cri.go:89] found id: "d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544"
	I0827 22:56:56.628902   47307 cri.go:89] found id: "7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519"
	I0827 22:56:56.628907   47307 cri.go:89] found id: "827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499"
	I0827 22:56:56.628914   47307 cri.go:89] found id: "2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6"
	I0827 22:56:56.628921   47307 cri.go:89] found id: "ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec"
	I0827 22:56:56.628926   47307 cri.go:89] found id: "30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1"
	I0827 22:56:56.628933   47307 cri.go:89] found id: "855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3"
	I0827 22:56:56.628941   47307 cri.go:89] found id: ""
	I0827 22:56:56.628987   47307 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.527037281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799522527012093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33884a23-9560-4908-a85a-49367162375a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.527606397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17712188-96fd-4bc9-9a6e-1df61b026cc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.527668193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17712188-96fd-4bc9-9a6e-1df61b026cc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.528023377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17712188-96fd-4bc9-9a6e-1df61b026cc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.573347399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16e44816-37cf-4ba7-af5a-ec7945d4668b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.573430916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16e44816-37cf-4ba7-af5a-ec7945d4668b name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.574709299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebe0c061-fc07-467f-b76c-bb7ba9ee4372 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.575131945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799522575108838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebe0c061-fc07-467f-b76c-bb7ba9ee4372 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.575617543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d17493-2e75-47a5-9e0b-a81853a53ff3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.575698944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d17493-2e75-47a5-9e0b-a81853a53ff3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.576037481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d17493-2e75-47a5-9e0b-a81853a53ff3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.614592147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47fdca10-83f5-4b63-8383-d6914e695982 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.614664205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47fdca10-83f5-4b63-8383-d6914e695982 name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.615591358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39518cc1-2b62-4f13-9c77-0442c122bf3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.616046226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799522616022538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39518cc1-2b62-4f13-9c77-0442c122bf3e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.616555096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5ee37ca-314f-484e-96cd-1cff551953a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.616604006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5ee37ca-314f-484e-96cd-1cff551953a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.616926132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5ee37ca-314f-484e-96cd-1cff551953a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.657158404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd933290-a698-44dc-9280-a9be563fdfba name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.657276514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd933290-a698-44dc-9280-a9be563fdfba name=/runtime.v1.RuntimeService/Version
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.658498533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a91848e2-5d3a-45e0-974a-2b1aedb13f6b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.658898692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799522658875798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a91848e2-5d3a-45e0-974a-2b1aedb13f6b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.659354547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a8ccff7-bcac-4e49-bf7e-2a01f72968bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.659407654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a8ccff7-bcac-4e49-bf7e-2a01f72968bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 22:58:42 multinode-465478 crio[2838]: time="2024-08-27 22:58:42.659789543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a8ccff7-bcac-4e49-bf7e-2a01f72968bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	03f13a933ea01       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9b275278e9ebf       busybox-7dff88458-j67n7
	f96a3e43f2516       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   50e97919d8e1c       kindnet-rljzm
	d230ca4fe79e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   17b1de931c58e       storage-provisioner
	32f1f5a9e23ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   06812a3f6bedc       coredns-6f6b679f8f-gj4hr
	de730b11023bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   cf93392a97b67       kube-proxy-dc2v7
	3e6dcde1fec1d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   25ed24d2b4d3a       kube-scheduler-multinode-465478
	0a1c64bb0ada0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   2dcac5c4436bf       kube-apiserver-multinode-465478
	1c5241296cf9c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   3d9d1aadd5640       etcd-multinode-465478
	d2654e5707701       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   e52dd627b3b14       kube-controller-manager-multinode-465478
	015744245af62       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   6ff886066d4e8       coredns-6f6b679f8f-gj4hr
	abacf11082d9b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   51d1305a97dcb       busybox-7dff88458-j67n7
	d1aeddd6a3284       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   c087196811cff       storage-provisioner
	7bd5a7a1c1c9f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   e1491b885f8db       kindnet-rljzm
	827bc3f7e5631       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   97c971208b462       kube-proxy-dc2v7
	2597b46782de6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   584d32decdff1       kube-scheduler-multinode-465478
	ab19f142adda1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   25aa276b34f1f       etcd-multinode-465478
	30e40e98f2f39       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   6055c78681fcc       kube-apiserver-multinode-465478
	855e93985a2f5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   8279eef7e32f9       kube-controller-manager-multinode-465478
	
	
	==> coredns [015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55072 - 10935 "HINFO IN 191575834805188837.5841183418608395440. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010368892s
	
	
	==> coredns [32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46661 - 35624 "HINFO IN 6064224368019421635.5774456436384908123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011420854s
	
	
	==> describe nodes <==
	Name:               multinode-465478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-465478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=multinode-465478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_50_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-465478
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-465478
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 801a03968a924d54a795372514743338
	  System UUID:                801a0396-8a92-4d54-a795-372514743338
	  Boot ID:                    13263c25-4cc2-45a2-97a8-b5c453fc8328
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j67n7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 coredns-6f6b679f8f-gj4hr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-465478                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m29s
	  kube-system                 kindnet-rljzm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m25s
	  kube-system                 kube-apiserver-multinode-465478             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-465478    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-dc2v7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-multinode-465478             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m23s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 8m30s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m29s                kubelet          Node multinode-465478 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m29s                kubelet          Node multinode-465478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m29s                kubelet          Node multinode-465478 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m26s                node-controller  Node multinode-465478 event: Registered Node multinode-465478 in Controller
	  Normal  NodeReady                8m9s                 kubelet          Node multinode-465478 status is now: NodeReady
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node multinode-465478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node multinode-465478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node multinode-465478 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-465478 event: Registered Node multinode-465478 in Controller
	
	
	Name:               multinode-465478-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-465478-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=multinode-465478
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:57:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-465478-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:57:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:57:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:57:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:58:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    multinode-465478-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ea9254f1e0b4048a297bfb38bbb05ec
	  System UUID:                2ea9254f-1e0b-4048-a297-bfb38bbb05ec
	  Boot ID:                    7573361a-1317-4c22-904d-e9ef094d8330
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-msj9p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-2gs8n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m42s
	  kube-system                 kube-proxy-8nfs4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m37s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m42s)  kubelet     Node multinode-465478-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m42s)  kubelet     Node multinode-465478-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m42s)  kubelet     Node multinode-465478-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m23s                  kubelet     Node multinode-465478-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-465478-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-465478-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-465478-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-465478-m02 status is now: NodeReady
	
	
	Name:               multinode-465478-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-465478-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=multinode-465478
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_58_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:58:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-465478-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:58:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:58:39 +0000   Tue, 27 Aug 2024 22:58:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:58:39 +0000   Tue, 27 Aug 2024 22:58:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:58:39 +0000   Tue, 27 Aug 2024 22:58:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:58:39 +0000   Tue, 27 Aug 2024 22:58:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    multinode-465478-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6eecbf2a68154e338135c4a07eb201ed
	  System UUID:                6eecbf2a-6815-4e33-8135-c4a07eb201ed
	  Boot ID:                    b2f51cad-8319-4a80-8a7b-fda7ccfa6ca5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gmcnn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-rpnjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m41s (x3 over 6m41s)  kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x3 over 6m41s)  kubelet          Node multinode-465478-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x3 over 6m41s)  kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m21s                  kubelet          Node multinode-465478-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-465478-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m34s                  kubelet          Node multinode-465478-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-465478-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-465478-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-465478-m03 event: Registered Node multinode-465478-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-465478-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053313] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.152499] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.128033] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.247480] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[Aug27 22:50] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +3.540174] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.066170] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.487873] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.087914] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.088664] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.138754] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.900950] kauditd_printk_skb: 60 callbacks suppressed
	[Aug27 22:51] kauditd_printk_skb: 14 callbacks suppressed
	[Aug27 22:56] systemd-fstab-generator[2656]: Ignoring "noauto" option for root device
	[  +0.153954] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.233284] systemd-fstab-generator[2748]: Ignoring "noauto" option for root device
	[  +0.201484] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.282227] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[ +10.230305] systemd-fstab-generator[2944]: Ignoring "noauto" option for root device
	[  +0.081698] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.782620] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	[Aug27 22:57] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.782217] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.793674] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +18.213697] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa] <==
	{"level":"info","ts":"2024-08-27T22:56:59.213666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:56:59.213755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:56:59.214100Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"28dd8e6bbca035f5","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-27T22:56:59.218572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.229818Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T22:56:59.230135Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T22:56:59.230171Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T22:56:59.230296Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:56:59.230316Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:56:59.764310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.770644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:56:59.771693Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.774511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-08-27T22:56:59.774812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:56:59.775456Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.776128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T22:56:59.789159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T22:56:59.789201Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T22:56:59.770600Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-465478 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	
	
	==> etcd [ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec] <==
	{"level":"info","ts":"2024-08-27T22:51:09.431067Z","caller":"traceutil/trace.go:171","msg":"trace[744458532] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"189.932497ms","start":"2024-08-27T22:51:09.241118Z","end":"2024-08-27T22:51:09.431050Z","steps":["trace[744458532] 'process raft request'  (duration: 62.45434ms)","trace[744458532] 'compare'  (duration: 126.761507ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-27T22:52:00.909387Z","caller":"traceutil/trace.go:171","msg":"trace[162212405] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"168.226835ms","start":"2024-08-27T22:52:00.741142Z","end":"2024-08-27T22:52:00.909369Z","steps":["trace[162212405] 'process raft request'  (duration: 168.052201ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:52:01.194035Z","caller":"traceutil/trace.go:171","msg":"trace[291470518] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"159.526943ms","start":"2024-08-27T22:52:01.034491Z","end":"2024-08-27T22:52:01.194018Z","steps":["trace[291470518] 'process raft request'  (duration: 98.655423ms)","trace[291470518] 'compare'  (duration: 60.784862ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.711547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.572947ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888173926873632674 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-x2d9q\" mod_revision:581 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-27T22:52:01.712421Z","caller":"traceutil/trace.go:171","msg":"trace[394810362] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"318.35492ms","start":"2024-08-27T22:52:01.394049Z","end":"2024-08-27T22:52:01.712404Z","steps":["trace[394810362] 'read index received'  (duration: 78.457368ms)","trace[394810362] 'applied index is now lower than readState.Index'  (duration: 239.895792ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.712548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.4776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-27T22:52:01.712580Z","caller":"traceutil/trace.go:171","msg":"trace[1193444802] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:582; }","duration":"318.526049ms","start":"2024-08-27T22:52:01.394043Z","end":"2024-08-27T22:52:01.712569Z","steps":["trace[1193444802] 'agreement among raft nodes before linearized reading'  (duration: 318.435662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-27T22:52:01.712677Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-27T22:52:01.393999Z","time spent":"318.653068ms","remote":"127.0.0.1:57196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":12,"response size":29,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true "}
	{"level":"info","ts":"2024-08-27T22:52:01.712909Z","caller":"traceutil/trace.go:171","msg":"trace[786201546] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"442.033471ms","start":"2024-08-27T22:52:01.270861Z","end":"2024-08-27T22:52:01.712895Z","steps":["trace[786201546] 'process raft request'  (duration: 201.684602ms)","trace[786201546] 'compare'  (duration: 238.465093ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.713011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-27T22:52:01.270844Z","time spent":"442.113957ms","remote":"127.0.0.1:57130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2346,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-x2d9q\" mod_revision:581 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" > >"}
	{"level":"info","ts":"2024-08-27T22:52:53.827398Z","caller":"traceutil/trace.go:171","msg":"trace[1783336745] transaction","detail":"{read_only:false; response_revision:704; number_of_response:1; }","duration":"204.591624ms","start":"2024-08-27T22:52:53.622782Z","end":"2024-08-27T22:52:53.827373Z","steps":["trace[1783336745] 'process raft request'  (duration: 204.430699ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:52:57.373395Z","caller":"traceutil/trace.go:171","msg":"trace[716131246] linearizableReadLoop","detail":"{readStateIndex:761; appliedIndex:760; }","duration":"149.498374ms","start":"2024-08-27T22:52:57.223878Z","end":"2024-08-27T22:52:57.373376Z","steps":["trace[716131246] 'read index received'  (duration: 149.284642ms)","trace[716131246] 'applied index is now lower than readState.Index'  (duration: 212.994µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-27T22:52:57.373579Z","caller":"traceutil/trace.go:171","msg":"trace[60494434] transaction","detail":"{read_only:false; response_revision:714; number_of_response:1; }","duration":"189.292014ms","start":"2024-08-27T22:52:57.184278Z","end":"2024-08-27T22:52:57.373570Z","steps":["trace[60494434] 'process raft request'  (duration: 188.926601ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-27T22:52:57.373717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.727573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-465478-m03\" ","response":"range_response_count:1 size:3111"}
	{"level":"info","ts":"2024-08-27T22:52:57.373781Z","caller":"traceutil/trace.go:171","msg":"trace[1429209250] range","detail":"{range_begin:/registry/minions/multinode-465478-m03; range_end:; response_count:1; response_revision:714; }","duration":"143.813615ms","start":"2024-08-27T22:52:57.229956Z","end":"2024-08-27T22:52:57.373770Z","steps":["trace[1429209250] 'agreement among raft nodes before linearized reading'  (duration: 143.646181ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:55:13.856126Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-27T22:55:13.856288Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-465478","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-08-27T22:55:13.856391Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.856479Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.933937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.933988Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T22:55:13.934061Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-08-27T22:55:13.936674Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:55:13.936832Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:55:13.936878Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-465478","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> kernel <==
	 22:58:43 up 9 min,  0 users,  load average: 0.04, 0.10, 0.07
	Linux multinode-465478 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519] <==
	I0827 22:54:32.972602       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:54:42.970836       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:54:42.971019       1 main.go:299] handling current node
	I0827 22:54:42.971067       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:54:42.971089       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:54:42.971298       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:54:42.971329       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:54:52.964015       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:54:52.964139       1 main.go:299] handling current node
	I0827 22:54:52.964181       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:54:52.964200       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:54:52.964402       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:54:52.964432       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:55:02.965422       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:55:02.965463       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:55:02.965615       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:55:02.965636       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:55:02.965690       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:55:02.965696       1 main.go:299] handling current node
	I0827 22:55:12.972844       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:55:12.972900       1 main.go:299] handling current node
	I0827 22:55:12.972928       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:55:12.972936       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:55:12.973149       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:55:12.973156       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065] <==
	I0827 22:57:53.761694       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:58:03.761056       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:58:03.761259       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:58:03.761445       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:58:03.761474       1 main.go:299] handling current node
	I0827 22:58:03.761497       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:58:03.761514       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:58:13.761332       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:58:13.761487       1 main.go:299] handling current node
	I0827 22:58:13.761527       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:58:13.761552       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:58:13.761750       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:58:13.761780       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:58:23.760639       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:58:23.760789       1 main.go:299] handling current node
	I0827 22:58:23.760828       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:58:23.760852       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:58:23.761104       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:58:23.761169       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.2.0/24] 
	I0827 22:58:33.760634       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:58:33.760683       1 main.go:299] handling current node
	I0827 22:58:33.760702       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:58:33.760710       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:58:33.760937       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:58:33.760967       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e] <==
	I0827 22:57:01.304655       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 22:57:01.304793       1 policy_source.go:224] refreshing policies
	I0827 22:57:01.305520       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 22:57:01.313791       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 22:57:01.314180       1 aggregator.go:171] initial CRD sync complete...
	I0827 22:57:01.314212       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 22:57:01.314370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 22:57:01.314390       1 cache.go:39] Caches are synced for autoregister controller
	I0827 22:57:01.314508       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0827 22:57:01.315068       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 22:57:01.315791       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 22:57:01.315819       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 22:57:01.318768       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 22:57:01.391594       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 22:57:01.403787       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 22:57:01.408508       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 22:57:01.418805       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 22:57:02.216462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0827 22:57:03.482752       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:57:03.632917       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:57:03.649372       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:57:03.723894       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 22:57:03.731917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 22:57:04.878639       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:57:05.040639       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1] <==
	I0827 22:50:12.167650       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:50:13.042663       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:50:13.055818       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0827 22:50:13.063771       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:50:17.771489       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0827 22:50:17.869871       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0827 22:51:27.096043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51634: use of closed network connection
	E0827 22:51:27.280742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51650: use of closed network connection
	E0827 22:51:27.617118       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51678: use of closed network connection
	E0827 22:51:27.779608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51682: use of closed network connection
	E0827 22:51:27.942405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51700: use of closed network connection
	E0827 22:51:28.208710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51728: use of closed network connection
	E0827 22:51:28.367660       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51740: use of closed network connection
	E0827 22:51:28.544750       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51754: use of closed network connection
	E0827 22:51:28.717446       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51778: use of closed network connection
	I0827 22:55:13.848587       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0827 22:55:13.862910       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869536       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869611       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869656       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869687       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869733       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869765       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869798       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.880957       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3] <==
	I0827 22:52:49.035463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:49.267719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:52:49.268547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.490155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:52:50.490320       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-465478-m03\" does not exist"
	I0827 22:52:50.513621       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-465478-m03" podCIDRs=["10.244.3.0/24"]
	I0827 22:52:50.513811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.513943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.756083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:51.075668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:52.004835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:00.873993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:08.687179       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:53:08.688327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:08.697925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:11.968174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:46.986493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:53:46.989814       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m03"
	I0827 22:53:47.016693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:53:47.057800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.478195ms"
	I0827 22:53:47.058836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.569µs"
	I0827 22:53:52.058799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:52.075322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:52.091619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:54:02.165556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	
	
	==> kube-controller-manager [d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05] <==
	I0827 22:58:01.746428       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:01.746669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:58:01.757849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:58:01.763414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.153µs"
	I0827 22:58:01.776105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.711µs"
	I0827 22:58:04.676542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:58:05.606908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.816765ms"
	I0827 22:58:05.607616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="106.546µs"
	I0827 22:58:13.237450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:58:19.254164       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:19.271469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:19.478335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:19.479265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:20.647347       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-465478-m03\" does not exist"
	I0827 22:58:20.648182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:20.665187       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-465478-m03" podCIDRs=["10.244.2.0/24"]
	I0827 22:58:20.665266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:20.665293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:20.980921       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:21.334376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:24.775118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:30.730448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:39.923619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:39.923867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:39.941769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	
	
	==> kube-proxy [827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:50:18.895872       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:50:18.921433       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0827 22:50:18.921656       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:50:18.951368       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:50:18.951452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:50:18.951492       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:50:18.953840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:50:18.954191       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:50:18.954280       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:50:18.955554       1 config.go:197] "Starting service config controller"
	I0827 22:50:18.955600       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:50:18.955634       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:50:18.955650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:50:18.956143       1 config.go:326] "Starting node config controller"
	I0827 22:50:18.957825       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:50:19.057396       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:50:19.057506       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:50:19.058831       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:57:03.086629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:57:03.103688       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0827 22:57:03.104510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:57:03.177882       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:57:03.177926       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:57:03.177954       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:57:03.180981       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:57:03.181367       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:57:03.181617       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:57:03.184368       1 config.go:197] "Starting service config controller"
	I0827 22:57:03.184505       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:57:03.187734       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:57:03.187742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:57:03.188432       1 config.go:326] "Starting node config controller"
	I0827 22:57:03.188441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:57:03.288395       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:57:03.288581       1 shared_informer.go:320] Caches are synced for node config
	I0827 22:57:03.288192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6] <==
	E0827 22:50:10.234777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:10.234525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:10.234820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.078537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:11.078600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.079430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 22:50:11.079467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.158134       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:50:11.158626       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0827 22:50:11.189258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 22:50:11.189303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.210837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 22:50:11.210951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.283763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 22:50:11.284000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.288715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0827 22:50:11.288790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.352004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:50:11.352184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.371553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:11.371603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.482913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 22:50:11.483347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0827 22:50:14.214609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:55:13.850359       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb] <==
	I0827 22:57:00.478586       1 serving.go:386] Generated self-signed cert in-memory
	W0827 22:57:01.254047       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 22:57:01.254142       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 22:57:01.254184       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 22:57:01.254211       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 22:57:01.331387       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 22:57:01.331412       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:57:01.336310       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 22:57:01.336527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 22:57:01.336566       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 22:57:01.336585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 22:57:01.436907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 22:57:08 multinode-465478 kubelet[3073]: E0827 22:57:08.249190    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799428248485397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:10 multinode-465478 kubelet[3073]: I0827 22:57:10.318865    3073 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 27 22:57:18 multinode-465478 kubelet[3073]: E0827 22:57:18.252685    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799438251366256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:18 multinode-465478 kubelet[3073]: E0827 22:57:18.252713    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799438251366256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:28 multinode-465478 kubelet[3073]: E0827 22:57:28.256728    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799448256002251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:28 multinode-465478 kubelet[3073]: E0827 22:57:28.257050    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799448256002251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:38 multinode-465478 kubelet[3073]: E0827 22:57:38.259478    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799458259052119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:38 multinode-465478 kubelet[3073]: E0827 22:57:38.259531    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799458259052119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:48 multinode-465478 kubelet[3073]: E0827 22:57:48.263539    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799468261454517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:48 multinode-465478 kubelet[3073]: E0827 22:57:48.263578    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799468261454517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:58 multinode-465478 kubelet[3073]: E0827 22:57:58.246090    3073 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:57:58 multinode-465478 kubelet[3073]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:57:58 multinode-465478 kubelet[3073]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:57:58 multinode-465478 kubelet[3073]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:57:58 multinode-465478 kubelet[3073]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:57:58 multinode-465478 kubelet[3073]: E0827 22:57:58.265053    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799478264359793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:57:58 multinode-465478 kubelet[3073]: E0827 22:57:58.265091    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799478264359793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:08 multinode-465478 kubelet[3073]: E0827 22:58:08.270802    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799488270381995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:08 multinode-465478 kubelet[3073]: E0827 22:58:08.270836    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799488270381995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:18 multinode-465478 kubelet[3073]: E0827 22:58:18.272667    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799498272431856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:18 multinode-465478 kubelet[3073]: E0827 22:58:18.272696    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799498272431856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:28 multinode-465478 kubelet[3073]: E0827 22:58:28.276390    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799508275821803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:28 multinode-465478 kubelet[3073]: E0827 22:58:28.276445    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799508275821803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:38 multinode-465478 kubelet[3073]: E0827 22:58:38.277831    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799518277611402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:58:38 multinode-465478 kubelet[3073]: E0827 22:58:38.277861    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799518277611402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 22:58:42.253515   48462 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
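Note: the stderr message above is a log-collection problem rather than a cluster failure. "minikube logs" gave up echoing the previous start log because a single line in lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit (the very long cluster-config dumps later in this report are the likely culprits). A quick, hypothetical check on the affected file, assuming GNU coreutils and awk on the CI host:

	wc -L /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt      # longest line length
	awk 'length > 65536 { print "line " NR ": " length " chars" }' /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt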
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-465478 -n multinode-465478
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-465478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.80s)
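Note on the repeated kubelet errors in the log above: roughly every 10s the eviction manager asks CRI-O for image-filesystem stats and logs "failed to get HasDedicatedImageFs: missing image stats", even though the printed ImageFsInfoResponse contains a usage entry for /var/lib/containers/storage/overlay-images; only the ContainerFilesystems list is empty, which current kubelets appear to treat as incomplete stats. To see what the runtime itself reports, a minimal check from inside the node could look like this (a sketch, assuming the crictl binary normally shipped in the minikube guest):

	out/minikube-linux-amd64 -p multinode-465478 ssh
	sudo crictl imagefsinfo        # image filesystem usage as reported by CRI-O
	sudo crictl info               # runtime status and config, for comparison with the kubelet error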

                                                
                                    

TestMultiNode/serial/StopMultiNode (141.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 stop
E0827 22:59:24.314791   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465478 stop: exit status 82 (2m0.456629745s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-465478-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-465478 stop": exit status 82
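Here "minikube stop" exits 82 (GUEST_STOP_TIMEOUT) because the m02 machine never leaves the "Running" state within the roughly 2-minute stop window (the cert_rotation line about the functional-299635 profile looks like leftover watcher noise from an earlier profile and is probably unrelated). With the kvm2 driver each node is a libvirt domain, so one way to confirm what libvirt sees and to force the worker down before retrying is roughly the following (a sketch; the domain names assume minikube's usual <profile> and <profile>-m02 naming):

	virsh -c qemu:///system list --all                            # multinode-465478-m02 should show as running
	virsh -c qemu:///system shutdown multinode-465478-m02         # ask the guest for a clean ACPI shutdown
	virsh -c qemu:///system destroy multinode-465478-m02          # hard power-off if the shutdown hangs
	out/minikube-linux-amd64 -p multinode-465478 stop --alsologtostderr   # retry the stop with verbose logging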
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465478 status: exit status 3 (18.815282132s)

                                                
                                                
-- stdout --
	multinode-465478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-465478-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 23:01:05.380761   49130 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	E0827 23:01:05.380817   49130 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-465478 status" : exit status 3
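The status failure is consistent with the stop failure above: nothing answers on 192.168.39.118:22, so every SSH-based probe returns "no route to host" and minikube reports the m02 host as Error and its kubelet as Nonexistent. A quick reachability check using the machine key minikube keeps per node (a sketch; the m02 key path assumes the usual .minikube/machines/<node>/id_rsa layout, as seen for the primary node later in these logs):

	ping -c 2 192.168.39.118
	nc -vz -w 3 192.168.39.118 22
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478-m02/id_rsa \
	  docker@192.168.39.118 uptime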
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-465478 -n multinode-465478
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-465478 logs -n 25: (1.470076245s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478:/home/docker/cp-test_multinode-465478-m02_multinode-465478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478 sudo cat                                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m02_multinode-465478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03:/home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478-m03 sudo cat                                   | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp testdata/cp-test.txt                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478:/home/docker/cp-test_multinode-465478-m03_multinode-465478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478 sudo cat                                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02:/home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478-m02 sudo cat                                   | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-465478 node stop m03                                                          | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	| node    | multinode-465478 node start                                                             | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:53 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| stop    | -p multinode-465478                                                                     | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| start   | -p multinode-465478                                                                     | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:55 UTC | 27 Aug 24 22:58 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC |                     |
	| node    | multinode-465478 node delete                                                            | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC | 27 Aug 24 22:58 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-465478 stop                                                                   | multinode-465478 | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:55:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:55:13.022109   47307 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:55:13.022223   47307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:55:13.022233   47307 out.go:358] Setting ErrFile to fd 2...
	I0827 22:55:13.022239   47307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:55:13.022420   47307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:55:13.022954   47307 out.go:352] Setting JSON to false
	I0827 22:55:13.023914   47307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5860,"bootTime":1724793453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:55:13.023971   47307 start.go:139] virtualization: kvm guest
	I0827 22:55:13.026108   47307 out.go:177] * [multinode-465478] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:55:13.027395   47307 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:55:13.027400   47307 notify.go:220] Checking for updates...
	I0827 22:55:13.030743   47307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:55:13.031982   47307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:55:13.033384   47307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:55:13.034902   47307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:55:13.036175   47307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:55:13.037864   47307 config.go:182] Loaded profile config "multinode-465478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:55:13.037963   47307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:55:13.038383   47307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:55:13.038430   47307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:55:13.062494   47307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0827 22:55:13.062937   47307 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:55:13.063569   47307 main.go:141] libmachine: Using API Version  1
	I0827 22:55:13.063598   47307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:55:13.063916   47307 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:55:13.064096   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.099184   47307 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 22:55:13.100530   47307 start.go:297] selected driver: kvm2
	I0827 22:55:13.100545   47307 start.go:901] validating driver "kvm2" against &{Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:55:13.100668   47307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:55:13.100962   47307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:55:13.101039   47307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 22:55:13.116211   47307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 22:55:13.116972   47307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:55:13.117009   47307 cni.go:84] Creating CNI manager for ""
	I0827 22:55:13.117015   47307 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0827 22:55:13.117081   47307 start.go:340] cluster config:
	{Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:55:13.117193   47307 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:55:13.119824   47307 out.go:177] * Starting "multinode-465478" primary control-plane node in "multinode-465478" cluster
	I0827 22:55:13.121319   47307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:55:13.121353   47307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 22:55:13.121363   47307 cache.go:56] Caching tarball of preloaded images
	I0827 22:55:13.121432   47307 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 22:55:13.121442   47307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 22:55:13.121560   47307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/config.json ...
	I0827 22:55:13.121777   47307 start.go:360] acquireMachinesLock for multinode-465478: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 22:55:13.121813   47307 start.go:364] duration metric: took 20.608µs to acquireMachinesLock for "multinode-465478"
	I0827 22:55:13.121828   47307 start.go:96] Skipping create...Using existing machine configuration
	I0827 22:55:13.121833   47307 fix.go:54] fixHost starting: 
	I0827 22:55:13.122077   47307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:55:13.122107   47307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:55:13.136366   47307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0827 22:55:13.136922   47307 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:55:13.137384   47307 main.go:141] libmachine: Using API Version  1
	I0827 22:55:13.137398   47307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:55:13.137793   47307 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:55:13.138028   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.138184   47307 main.go:141] libmachine: (multinode-465478) Calling .GetState
	I0827 22:55:13.139845   47307 fix.go:112] recreateIfNeeded on multinode-465478: state=Running err=<nil>
	W0827 22:55:13.139866   47307 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 22:55:13.141885   47307 out.go:177] * Updating the running kvm2 "multinode-465478" VM ...
	I0827 22:55:13.143234   47307 machine.go:93] provisionDockerMachine start ...
	I0827 22:55:13.143252   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:55:13.143454   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.145771   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.146208   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.146237   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.146352   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.146522   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.146642   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.146859   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.147025   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.147231   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.147245   47307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 22:55:13.257982   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-465478
	
	I0827 22:55:13.258036   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.258270   47307 buildroot.go:166] provisioning hostname "multinode-465478"
	I0827 22:55:13.258294   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.258504   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.261289   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.261671   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.261694   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.261851   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.262031   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.262196   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.262294   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.262421   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.262600   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.262616   47307 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-465478 && echo "multinode-465478" | sudo tee /etc/hostname
	I0827 22:55:13.383433   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-465478
	
	I0827 22:55:13.383469   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.386293   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.386701   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.386725   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.386955   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.387156   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.387342   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.387524   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.387701   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.387942   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.387966   47307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-465478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-465478/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-465478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:55:13.497064   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:55:13.497088   47307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 22:55:13.497114   47307 buildroot.go:174] setting up certificates
	I0827 22:55:13.497123   47307 provision.go:84] configureAuth start
	I0827 22:55:13.497131   47307 main.go:141] libmachine: (multinode-465478) Calling .GetMachineName
	I0827 22:55:13.497417   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:55:13.499930   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.500265   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.500288   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.500500   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.502526   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.502867   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.502899   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.503013   47307 provision.go:143] copyHostCerts
	I0827 22:55:13.503039   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:55:13.503071   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 22:55:13.503086   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 22:55:13.503154   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 22:55:13.503245   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:55:13.503269   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 22:55:13.503273   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 22:55:13.503297   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 22:55:13.503351   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:55:13.503366   47307 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 22:55:13.503373   47307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 22:55:13.503392   47307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 22:55:13.503442   47307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.multinode-465478 san=[127.0.0.1 192.168.39.203 localhost minikube multinode-465478]
	I0827 22:55:13.572714   47307 provision.go:177] copyRemoteCerts
	I0827 22:55:13.572780   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:55:13.572803   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.575491   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.575848   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.575875   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.576065   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.576231   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.576366   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.576549   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:55:13.658322   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0827 22:55:13.658392   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 22:55:13.682056   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0827 22:55:13.682125   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0827 22:55:13.705603   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0827 22:55:13.705685   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 22:55:13.728103   47307 provision.go:87] duration metric: took 230.968024ms to configureAuth
	I0827 22:55:13.728134   47307 buildroot.go:189] setting minikube options for container-runtime
	I0827 22:55:13.728348   47307 config.go:182] Loaded profile config "multinode-465478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:55:13.728437   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:55:13.730767   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.731119   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:55:13.731149   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:55:13.731279   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:55:13.731440   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.731634   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:55:13.731774   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:55:13.731977   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:55:13.732130   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:55:13.732145   47307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 22:56:44.362478   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 22:56:44.362507   47307 machine.go:96] duration metric: took 1m31.219260749s to provisionDockerMachine
	I0827 22:56:44.362520   47307 start.go:293] postStartSetup for "multinode-465478" (driver="kvm2")
	I0827 22:56:44.362530   47307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:56:44.362545   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.362876   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:56:44.362898   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.366284   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.366734   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.366754   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.366953   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.367126   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.367271   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.367387   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.451066   47307 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:56:44.454909   47307 command_runner.go:130] > NAME=Buildroot
	I0827 22:56:44.454932   47307 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0827 22:56:44.454937   47307 command_runner.go:130] > ID=buildroot
	I0827 22:56:44.454942   47307 command_runner.go:130] > VERSION_ID=2023.02.9
	I0827 22:56:44.454946   47307 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0827 22:56:44.455033   47307 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 22:56:44.455053   47307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 22:56:44.455119   47307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 22:56:44.455199   47307 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 22:56:44.455209   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem
	I0827 22:56:44.455298   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 22:56:44.463978   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:56:44.490185   47307 start.go:296] duration metric: took 127.652878ms for postStartSetup
	I0827 22:56:44.490230   47307 fix.go:56] duration metric: took 1m31.368396123s for fixHost
	I0827 22:56:44.490251   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.493107   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.493583   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.493616   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.493762   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.493997   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.494132   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.494260   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.494408   47307 main.go:141] libmachine: Using SSH client type: native
	I0827 22:56:44.494587   47307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0827 22:56:44.494601   47307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 22:56:44.604942   47307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724799404.577277528
	
	I0827 22:56:44.604967   47307 fix.go:216] guest clock: 1724799404.577277528
	I0827 22:56:44.604976   47307 fix.go:229] Guest: 2024-08-27 22:56:44.577277528 +0000 UTC Remote: 2024-08-27 22:56:44.490235835 +0000 UTC m=+91.505291952 (delta=87.041693ms)
	I0827 22:56:44.605000   47307 fix.go:200] guest clock delta is within tolerance: 87.041693ms
	I0827 22:56:44.605006   47307 start.go:83] releasing machines lock for "multinode-465478", held for 1m31.48318308s
	I0827 22:56:44.605025   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.605267   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:56:44.607648   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.607984   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.608012   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.608167   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608656   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608798   47307 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:56:44.608942   47307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:56:44.608992   47307 ssh_runner.go:195] Run: cat /version.json
	I0827 22:56:44.608993   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.609006   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:56:44.611539   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611555   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611879   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.611897   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.611919   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:44.611936   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:44.612074   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.612181   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:56:44.612244   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.612330   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:56:44.612372   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.612430   47307 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:56:44.612495   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.612546   47307 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:56:44.715960   47307 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0827 22:56:44.716727   47307 command_runner.go:130] > {"iso_version": "v1.33.1-1724692311-19511", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "ab8c74129ca11fc20d41e21bf0a04c3a21513cf7"}
	I0827 22:56:44.716877   47307 ssh_runner.go:195] Run: systemctl --version
	I0827 22:56:44.722549   47307 command_runner.go:130] > systemd 252 (252)
	I0827 22:56:44.722582   47307 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0827 22:56:44.722642   47307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 22:56:44.877748   47307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 22:56:44.883265   47307 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0827 22:56:44.883308   47307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 22:56:44.883365   47307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:56:44.891868   47307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
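ssh_runner prints the find invocation above with its shell quoting stripped. Restoring the quoting (the exact escaping here is an assumption; the behaviour, renaming any bridge/podman CNI configs to *.mk_disabled, is what the log shows), the command run on the node is roughly:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

In this run it matched nothing, hence the "no active bridge cni configs found" message.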
	I0827 22:56:44.891888   47307 start.go:495] detecting cgroup driver to use...
	I0827 22:56:44.891945   47307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 22:56:44.907441   47307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 22:56:44.921695   47307 docker.go:217] disabling cri-docker service (if available) ...
	I0827 22:56:44.921772   47307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 22:56:44.936211   47307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 22:56:44.949085   47307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 22:56:45.085594   47307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 22:56:45.265455   47307 docker.go:233] disabling docker service ...
	I0827 22:56:45.265512   47307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 22:56:45.309840   47307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 22:56:45.324708   47307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 22:56:45.515571   47307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 22:56:45.681509   47307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 22:56:45.695587   47307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:56:45.713404   47307 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0827 22:56:45.713440   47307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 22:56:45.713497   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.723378   47307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 22:56:45.723443   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.734027   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.745925   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.757035   47307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:56:45.768068   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.778920   47307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 22:56:45.789117   47307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
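The sed edits between 22:56:45.713 and 22:56:45.789 all rewrite the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Collected in one place, the keys they guarantee look like the sketch below; the section headers follow CRI-O's documented schema and are an assumption, and any other keys already present in the file are omitted:

	# /etc/crio/crio.conf.d/02-crio.conf -- relevant keys only, reconstructed from the sed commands above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

CRI-O only picks these changes up on restart, which is why the log continues with systemctl daemon-reload and systemctl restart crio.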
	I0827 22:56:45.799783   47307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:56:45.809670   47307 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0827 22:56:45.809726   47307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:56:45.819664   47307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:56:45.964148   47307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 22:56:55.749546   47307 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.785362269s)
	I0827 22:56:55.749583   47307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 22:56:55.749630   47307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 22:56:55.754100   47307 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0827 22:56:55.754115   47307 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0827 22:56:55.754121   47307 command_runner.go:130] > Device: 0,22	Inode: 1402        Links: 1
	I0827 22:56:55.754128   47307 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0827 22:56:55.754133   47307 command_runner.go:130] > Access: 2024-08-27 22:56:55.609318101 +0000
	I0827 22:56:55.754139   47307 command_runner.go:130] > Modify: 2024-08-27 22:56:55.585316377 +0000
	I0827 22:56:55.754144   47307 command_runner.go:130] > Change: 2024-08-27 22:56:55.585316377 +0000
	I0827 22:56:55.754148   47307 command_runner.go:130] >  Birth: -
	I0827 22:56:55.754225   47307 start.go:563] Will wait 60s for crictl version
	I0827 22:56:55.754273   47307 ssh_runner.go:195] Run: which crictl
	I0827 22:56:55.757629   47307 command_runner.go:130] > /usr/bin/crictl
	I0827 22:56:55.757681   47307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:56:55.793073   47307 command_runner.go:130] > Version:  0.1.0
	I0827 22:56:55.793113   47307 command_runner.go:130] > RuntimeName:  cri-o
	I0827 22:56:55.793118   47307 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0827 22:56:55.793123   47307 command_runner.go:130] > RuntimeApiVersion:  v1
	I0827 22:56:55.794117   47307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
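The same version information can be pulled by hand with crictl's global endpoint flag, pointed at the socket written to /etc/crictl.yaml a few lines earlier (a sketch; on this node a plain `sudo crictl version` already reads that config file):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version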
	I0827 22:56:55.794196   47307 ssh_runner.go:195] Run: crio --version
	I0827 22:56:55.819761   47307 command_runner.go:130] > crio version 1.29.1
	I0827 22:56:55.819783   47307 command_runner.go:130] > Version:        1.29.1
	I0827 22:56:55.819789   47307 command_runner.go:130] > GitCommit:      unknown
	I0827 22:56:55.819793   47307 command_runner.go:130] > GitCommitDate:  unknown
	I0827 22:56:55.819797   47307 command_runner.go:130] > GitTreeState:   clean
	I0827 22:56:55.819802   47307 command_runner.go:130] > BuildDate:      2024-08-26T22:48:20Z
	I0827 22:56:55.819807   47307 command_runner.go:130] > GoVersion:      go1.21.6
	I0827 22:56:55.819811   47307 command_runner.go:130] > Compiler:       gc
	I0827 22:56:55.819815   47307 command_runner.go:130] > Platform:       linux/amd64
	I0827 22:56:55.819819   47307 command_runner.go:130] > Linkmode:       dynamic
	I0827 22:56:55.819823   47307 command_runner.go:130] > BuildTags:      
	I0827 22:56:55.819827   47307 command_runner.go:130] >   containers_image_ostree_stub
	I0827 22:56:55.819831   47307 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0827 22:56:55.819834   47307 command_runner.go:130] >   btrfs_noversion
	I0827 22:56:55.819840   47307 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0827 22:56:55.819844   47307 command_runner.go:130] >   libdm_no_deferred_remove
	I0827 22:56:55.819847   47307 command_runner.go:130] >   seccomp
	I0827 22:56:55.819851   47307 command_runner.go:130] > LDFlags:          unknown
	I0827 22:56:55.819855   47307 command_runner.go:130] > SeccompEnabled:   true
	I0827 22:56:55.819859   47307 command_runner.go:130] > AppArmorEnabled:  false
	I0827 22:56:55.820950   47307 ssh_runner.go:195] Run: crio --version
	I0827 22:56:55.847712   47307 command_runner.go:130] > crio version 1.29.1
	I0827 22:56:55.847736   47307 command_runner.go:130] > Version:        1.29.1
	I0827 22:56:55.847741   47307 command_runner.go:130] > GitCommit:      unknown
	I0827 22:56:55.847745   47307 command_runner.go:130] > GitCommitDate:  unknown
	I0827 22:56:55.847749   47307 command_runner.go:130] > GitTreeState:   clean
	I0827 22:56:55.847754   47307 command_runner.go:130] > BuildDate:      2024-08-26T22:48:20Z
	I0827 22:56:55.847767   47307 command_runner.go:130] > GoVersion:      go1.21.6
	I0827 22:56:55.847771   47307 command_runner.go:130] > Compiler:       gc
	I0827 22:56:55.847775   47307 command_runner.go:130] > Platform:       linux/amd64
	I0827 22:56:55.847778   47307 command_runner.go:130] > Linkmode:       dynamic
	I0827 22:56:55.847783   47307 command_runner.go:130] > BuildTags:      
	I0827 22:56:55.847787   47307 command_runner.go:130] >   containers_image_ostree_stub
	I0827 22:56:55.847792   47307 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0827 22:56:55.847795   47307 command_runner.go:130] >   btrfs_noversion
	I0827 22:56:55.847800   47307 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0827 22:56:55.847804   47307 command_runner.go:130] >   libdm_no_deferred_remove
	I0827 22:56:55.847807   47307 command_runner.go:130] >   seccomp
	I0827 22:56:55.847811   47307 command_runner.go:130] > LDFlags:          unknown
	I0827 22:56:55.847815   47307 command_runner.go:130] > SeccompEnabled:   true
	I0827 22:56:55.847819   47307 command_runner.go:130] > AppArmorEnabled:  false
	I0827 22:56:55.850808   47307 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 22:56:55.852206   47307 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:56:55.855086   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:55.855439   47307 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:56:55.855457   47307 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:56:55.855712   47307 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 22:56:55.859528   47307 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0827 22:56:55.859640   47307 kubeadm.go:883] updating cluster {Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:56:55.859820   47307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 22:56:55.859875   47307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:56:55.899917   47307 command_runner.go:130] > {
	I0827 22:56:55.899942   47307 command_runner.go:130] >   "images": [
	I0827 22:56:55.899948   47307 command_runner.go:130] >     {
	I0827 22:56:55.899968   47307 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0827 22:56:55.899975   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900006   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0827 22:56:55.900019   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900026   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900059   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0827 22:56:55.900071   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0827 22:56:55.900080   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900086   47307 command_runner.go:130] >       "size": "87165492",
	I0827 22:56:55.900092   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900102   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900108   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900114   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900120   47307 command_runner.go:130] >     },
	I0827 22:56:55.900128   47307 command_runner.go:130] >     {
	I0827 22:56:55.900138   47307 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0827 22:56:55.900147   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900156   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0827 22:56:55.900162   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900169   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900183   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0827 22:56:55.900197   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0827 22:56:55.900205   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900212   47307 command_runner.go:130] >       "size": "87190579",
	I0827 22:56:55.900220   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900235   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900243   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900250   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900257   47307 command_runner.go:130] >     },
	I0827 22:56:55.900263   47307 command_runner.go:130] >     {
	I0827 22:56:55.900273   47307 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0827 22:56:55.900282   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900289   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0827 22:56:55.900298   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900305   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900319   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0827 22:56:55.900335   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0827 22:56:55.900341   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900346   47307 command_runner.go:130] >       "size": "1363676",
	I0827 22:56:55.900352   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900356   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900360   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900366   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900369   47307 command_runner.go:130] >     },
	I0827 22:56:55.900372   47307 command_runner.go:130] >     {
	I0827 22:56:55.900379   47307 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0827 22:56:55.900384   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900391   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0827 22:56:55.900394   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900400   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900413   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0827 22:56:55.900432   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0827 22:56:55.900438   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900442   47307 command_runner.go:130] >       "size": "31470524",
	I0827 22:56:55.900446   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900452   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900455   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900460   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900481   47307 command_runner.go:130] >     },
	I0827 22:56:55.900489   47307 command_runner.go:130] >     {
	I0827 22:56:55.900498   47307 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0827 22:56:55.900508   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900516   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0827 22:56:55.900525   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900532   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900543   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0827 22:56:55.900552   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0827 22:56:55.900558   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900566   47307 command_runner.go:130] >       "size": "61245718",
	I0827 22:56:55.900572   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900576   47307 command_runner.go:130] >       "username": "nonroot",
	I0827 22:56:55.900580   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900590   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900595   47307 command_runner.go:130] >     },
	I0827 22:56:55.900599   47307 command_runner.go:130] >     {
	I0827 22:56:55.900604   47307 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0827 22:56:55.900608   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900615   47307 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0827 22:56:55.900619   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900625   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900632   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0827 22:56:55.900641   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0827 22:56:55.900653   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900659   47307 command_runner.go:130] >       "size": "149009664",
	I0827 22:56:55.900663   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900667   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900673   47307 command_runner.go:130] >       },
	I0827 22:56:55.900680   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900689   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900695   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900700   47307 command_runner.go:130] >     },
	I0827 22:56:55.900709   47307 command_runner.go:130] >     {
	I0827 22:56:55.900717   47307 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0827 22:56:55.900727   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900736   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0827 22:56:55.900744   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900751   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900765   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0827 22:56:55.900780   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0827 22:56:55.900788   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900793   47307 command_runner.go:130] >       "size": "95233506",
	I0827 22:56:55.900797   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900801   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900807   47307 command_runner.go:130] >       },
	I0827 22:56:55.900811   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900815   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900820   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900823   47307 command_runner.go:130] >     },
	I0827 22:56:55.900832   47307 command_runner.go:130] >     {
	I0827 22:56:55.900841   47307 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0827 22:56:55.900845   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900853   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0827 22:56:55.900856   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900860   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900881   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0827 22:56:55.900892   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0827 22:56:55.900895   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900900   47307 command_runner.go:130] >       "size": "89437512",
	I0827 22:56:55.900903   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.900907   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.900913   47307 command_runner.go:130] >       },
	I0827 22:56:55.900916   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900920   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900924   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900927   47307 command_runner.go:130] >     },
	I0827 22:56:55.900930   47307 command_runner.go:130] >     {
	I0827 22:56:55.900936   47307 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0827 22:56:55.900939   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.900944   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0827 22:56:55.900947   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900950   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.900959   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0827 22:56:55.900966   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0827 22:56:55.900969   47307 command_runner.go:130] >       ],
	I0827 22:56:55.900973   47307 command_runner.go:130] >       "size": "92728217",
	I0827 22:56:55.900977   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.900980   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.900984   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.900987   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.900990   47307 command_runner.go:130] >     },
	I0827 22:56:55.900993   47307 command_runner.go:130] >     {
	I0827 22:56:55.900999   47307 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0827 22:56:55.901002   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.901006   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0827 22:56:55.901014   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901018   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.901025   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0827 22:56:55.901032   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0827 22:56:55.901035   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901039   47307 command_runner.go:130] >       "size": "68420936",
	I0827 22:56:55.901042   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.901046   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.901052   47307 command_runner.go:130] >       },
	I0827 22:56:55.901056   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.901059   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.901065   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.901068   47307 command_runner.go:130] >     },
	I0827 22:56:55.901071   47307 command_runner.go:130] >     {
	I0827 22:56:55.901077   47307 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0827 22:56:55.901083   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.901088   47307 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0827 22:56:55.901091   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901094   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.901101   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0827 22:56:55.901110   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0827 22:56:55.901113   47307 command_runner.go:130] >       ],
	I0827 22:56:55.901117   47307 command_runner.go:130] >       "size": "742080",
	I0827 22:56:55.901121   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.901124   47307 command_runner.go:130] >         "value": "65535"
	I0827 22:56:55.901128   47307 command_runner.go:130] >       },
	I0827 22:56:55.901132   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.901135   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.901139   47307 command_runner.go:130] >       "pinned": true
	I0827 22:56:55.901142   47307 command_runner.go:130] >     }
	I0827 22:56:55.901146   47307 command_runner.go:130] >   ]
	I0827 22:56:55.901151   47307 command_runner.go:130] > }
	I0827 22:56:55.901330   47307 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:56:55.901341   47307 crio.go:433] Images already preloaded, skipping extraction
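The raw `sudo crictl images --output json` dumps above and below are easier to scan when reduced to tags. A sketch, assuming jq is available on the node (it is not shown anywhere in this log):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'

For this node that would print the kindnetd, busybox, storage-provisioner, coredns, etcd, kube-* and pause tags one per line.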
	I0827 22:56:55.901388   47307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 22:56:55.935460   47307 command_runner.go:130] > {
	I0827 22:56:55.935478   47307 command_runner.go:130] >   "images": [
	I0827 22:56:55.935483   47307 command_runner.go:130] >     {
	I0827 22:56:55.935490   47307 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0827 22:56:55.935495   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935501   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0827 22:56:55.935505   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935508   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935517   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0827 22:56:55.935523   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0827 22:56:55.935527   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935531   47307 command_runner.go:130] >       "size": "87165492",
	I0827 22:56:55.935535   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935539   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935546   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935553   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935556   47307 command_runner.go:130] >     },
	I0827 22:56:55.935560   47307 command_runner.go:130] >     {
	I0827 22:56:55.935566   47307 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0827 22:56:55.935572   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935577   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0827 22:56:55.935583   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935587   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935596   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0827 22:56:55.935603   47307 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0827 22:56:55.935607   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935615   47307 command_runner.go:130] >       "size": "87190579",
	I0827 22:56:55.935619   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935626   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935632   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935636   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935639   47307 command_runner.go:130] >     },
	I0827 22:56:55.935642   47307 command_runner.go:130] >     {
	I0827 22:56:55.935648   47307 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0827 22:56:55.935655   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935660   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0827 22:56:55.935671   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935675   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935682   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0827 22:56:55.935689   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0827 22:56:55.935695   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935700   47307 command_runner.go:130] >       "size": "1363676",
	I0827 22:56:55.935706   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935710   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935716   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935723   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935727   47307 command_runner.go:130] >     },
	I0827 22:56:55.935735   47307 command_runner.go:130] >     {
	I0827 22:56:55.935744   47307 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0827 22:56:55.935748   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935753   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0827 22:56:55.935759   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935763   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935770   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0827 22:56:55.935785   47307 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0827 22:56:55.935791   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935795   47307 command_runner.go:130] >       "size": "31470524",
	I0827 22:56:55.935801   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935805   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935809   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935813   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935817   47307 command_runner.go:130] >     },
	I0827 22:56:55.935820   47307 command_runner.go:130] >     {
	I0827 22:56:55.935826   47307 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0827 22:56:55.935832   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935837   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0827 22:56:55.935841   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935845   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935851   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0827 22:56:55.935860   47307 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0827 22:56:55.935864   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935868   47307 command_runner.go:130] >       "size": "61245718",
	I0827 22:56:55.935880   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.935886   47307 command_runner.go:130] >       "username": "nonroot",
	I0827 22:56:55.935890   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935893   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935897   47307 command_runner.go:130] >     },
	I0827 22:56:55.935900   47307 command_runner.go:130] >     {
	I0827 22:56:55.935907   47307 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0827 22:56:55.935913   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.935918   47307 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0827 22:56:55.935923   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935927   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.935936   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0827 22:56:55.935943   47307 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0827 22:56:55.935948   47307 command_runner.go:130] >       ],
	I0827 22:56:55.935952   47307 command_runner.go:130] >       "size": "149009664",
	I0827 22:56:55.935955   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.935959   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.935965   47307 command_runner.go:130] >       },
	I0827 22:56:55.935971   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.935975   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.935979   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.935982   47307 command_runner.go:130] >     },
	I0827 22:56:55.935986   47307 command_runner.go:130] >     {
	I0827 22:56:55.935991   47307 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0827 22:56:55.935998   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936002   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0827 22:56:55.936008   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936012   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936019   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0827 22:56:55.936028   47307 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0827 22:56:55.936032   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936036   47307 command_runner.go:130] >       "size": "95233506",
	I0827 22:56:55.936039   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936043   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936047   47307 command_runner.go:130] >       },
	I0827 22:56:55.936050   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936058   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936065   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936068   47307 command_runner.go:130] >     },
	I0827 22:56:55.936072   47307 command_runner.go:130] >     {
	I0827 22:56:55.936077   47307 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0827 22:56:55.936081   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936087   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0827 22:56:55.936092   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936096   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936120   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0827 22:56:55.936129   47307 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0827 22:56:55.936133   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936137   47307 command_runner.go:130] >       "size": "89437512",
	I0827 22:56:55.936143   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936147   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936150   47307 command_runner.go:130] >       },
	I0827 22:56:55.936154   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936158   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936161   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936165   47307 command_runner.go:130] >     },
	I0827 22:56:55.936168   47307 command_runner.go:130] >     {
	I0827 22:56:55.936174   47307 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0827 22:56:55.936180   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936185   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0827 22:56:55.936188   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936192   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936199   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0827 22:56:55.936210   47307 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0827 22:56:55.936215   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936219   47307 command_runner.go:130] >       "size": "92728217",
	I0827 22:56:55.936223   47307 command_runner.go:130] >       "uid": null,
	I0827 22:56:55.936226   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936232   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936235   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936239   47307 command_runner.go:130] >     },
	I0827 22:56:55.936242   47307 command_runner.go:130] >     {
	I0827 22:56:55.936253   47307 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0827 22:56:55.936259   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936263   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0827 22:56:55.936267   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936271   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936278   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0827 22:56:55.936286   47307 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0827 22:56:55.936290   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936294   47307 command_runner.go:130] >       "size": "68420936",
	I0827 22:56:55.936297   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936302   47307 command_runner.go:130] >         "value": "0"
	I0827 22:56:55.936305   47307 command_runner.go:130] >       },
	I0827 22:56:55.936311   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936315   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936321   47307 command_runner.go:130] >       "pinned": false
	I0827 22:56:55.936324   47307 command_runner.go:130] >     },
	I0827 22:56:55.936328   47307 command_runner.go:130] >     {
	I0827 22:56:55.936334   47307 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0827 22:56:55.936340   47307 command_runner.go:130] >       "repoTags": [
	I0827 22:56:55.936344   47307 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0827 22:56:55.936347   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936351   47307 command_runner.go:130] >       "repoDigests": [
	I0827 22:56:55.936364   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0827 22:56:55.936372   47307 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0827 22:56:55.936376   47307 command_runner.go:130] >       ],
	I0827 22:56:55.936380   47307 command_runner.go:130] >       "size": "742080",
	I0827 22:56:55.936383   47307 command_runner.go:130] >       "uid": {
	I0827 22:56:55.936387   47307 command_runner.go:130] >         "value": "65535"
	I0827 22:56:55.936393   47307 command_runner.go:130] >       },
	I0827 22:56:55.936396   47307 command_runner.go:130] >       "username": "",
	I0827 22:56:55.936403   47307 command_runner.go:130] >       "spec": null,
	I0827 22:56:55.936406   47307 command_runner.go:130] >       "pinned": true
	I0827 22:56:55.936410   47307 command_runner.go:130] >     }
	I0827 22:56:55.936414   47307 command_runner.go:130] >   ]
	I0827 22:56:55.936417   47307 command_runner.go:130] > }
	I0827 22:56:55.936549   47307 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 22:56:55.936561   47307 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:56:55.936568   47307 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.0 crio true true} ...
	I0827 22:56:55.936661   47307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-465478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
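The kubelet stanza above is the systemd drop-in minikube renders for this node. Written to disk it would be a unit fragment like the sketch below; the path is an assumption (kubeadm-style setups commonly use /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), while the directives themselves are copied from the log. The empty ExecStart= line is standard systemd drop-in practice: it clears the base unit's command before the override sets the real one.

	# assumed path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-465478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203

	[Install]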
	I0827 22:56:55.936726   47307 ssh_runner.go:195] Run: crio config
	I0827 22:56:55.968536   47307 command_runner.go:130] ! time="2024-08-27 22:56:55.940728265Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0827 22:56:55.974701   47307 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0827 22:56:55.985062   47307 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0827 22:56:55.985082   47307 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0827 22:56:55.985089   47307 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0827 22:56:55.985092   47307 command_runner.go:130] > #
	I0827 22:56:55.985098   47307 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0827 22:56:55.985104   47307 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0827 22:56:55.985109   47307 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0827 22:56:55.985115   47307 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0827 22:56:55.985119   47307 command_runner.go:130] > # reload'.
	I0827 22:56:55.985125   47307 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0827 22:56:55.985133   47307 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0827 22:56:55.985138   47307 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0827 22:56:55.985146   47307 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0827 22:56:55.985149   47307 command_runner.go:130] > [crio]
	I0827 22:56:55.985157   47307 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0827 22:56:55.985162   47307 command_runner.go:130] > # containers images, in this directory.
	I0827 22:56:55.985172   47307 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0827 22:56:55.985184   47307 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0827 22:56:55.985194   47307 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0827 22:56:55.985206   47307 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0827 22:56:55.985213   47307 command_runner.go:130] > # imagestore = ""
	I0827 22:56:55.985219   47307 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0827 22:56:55.985227   47307 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0827 22:56:55.985233   47307 command_runner.go:130] > storage_driver = "overlay"
	I0827 22:56:55.985239   47307 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0827 22:56:55.985246   47307 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0827 22:56:55.985253   47307 command_runner.go:130] > storage_option = [
	I0827 22:56:55.985260   47307 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0827 22:56:55.985263   47307 command_runner.go:130] > ]
	I0827 22:56:55.985273   47307 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0827 22:56:55.985281   47307 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0827 22:56:55.985286   47307 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0827 22:56:55.985294   47307 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0827 22:56:55.985299   47307 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0827 22:56:55.985306   47307 command_runner.go:130] > # always happen on a node reboot
	I0827 22:56:55.985311   47307 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0827 22:56:55.985322   47307 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0827 22:56:55.985330   47307 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0827 22:56:55.985335   47307 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0827 22:56:55.985341   47307 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0827 22:56:55.985348   47307 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0827 22:56:55.985358   47307 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0827 22:56:55.985362   47307 command_runner.go:130] > # internal_wipe = true
	I0827 22:56:55.985369   47307 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0827 22:56:55.985376   47307 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0827 22:56:55.985381   47307 command_runner.go:130] > # internal_repair = false
	I0827 22:56:55.985388   47307 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0827 22:56:55.985394   47307 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0827 22:56:55.985400   47307 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0827 22:56:55.985405   47307 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0827 22:56:55.985413   47307 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0827 22:56:55.985416   47307 command_runner.go:130] > [crio.api]
	I0827 22:56:55.985423   47307 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0827 22:56:55.985427   47307 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0827 22:56:55.985434   47307 command_runner.go:130] > # IP address on which the stream server will listen.
	I0827 22:56:55.985438   47307 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0827 22:56:55.985444   47307 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0827 22:56:55.985449   47307 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0827 22:56:55.985454   47307 command_runner.go:130] > # stream_port = "0"
	I0827 22:56:55.985459   47307 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0827 22:56:55.985463   47307 command_runner.go:130] > # stream_enable_tls = false
	I0827 22:56:55.985469   47307 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0827 22:56:55.985474   47307 command_runner.go:130] > # stream_idle_timeout = ""
	I0827 22:56:55.985484   47307 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0827 22:56:55.985492   47307 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0827 22:56:55.985499   47307 command_runner.go:130] > # minutes.
	I0827 22:56:55.985506   47307 command_runner.go:130] > # stream_tls_cert = ""
	I0827 22:56:55.985512   47307 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0827 22:56:55.985520   47307 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0827 22:56:55.985524   47307 command_runner.go:130] > # stream_tls_key = ""
	I0827 22:56:55.985529   47307 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0827 22:56:55.985537   47307 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0827 22:56:55.985556   47307 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0827 22:56:55.985563   47307 command_runner.go:130] > # stream_tls_ca = ""
	I0827 22:56:55.985570   47307 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0827 22:56:55.985576   47307 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0827 22:56:55.985583   47307 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0827 22:56:55.985589   47307 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0827 22:56:55.985595   47307 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0827 22:56:55.985601   47307 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0827 22:56:55.985605   47307 command_runner.go:130] > [crio.runtime]
	I0827 22:56:55.985616   47307 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0827 22:56:55.985624   47307 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0827 22:56:55.985627   47307 command_runner.go:130] > # "nofile=1024:2048"
	I0827 22:56:55.985633   47307 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0827 22:56:55.985639   47307 command_runner.go:130] > # default_ulimits = [
	I0827 22:56:55.985642   47307 command_runner.go:130] > # ]
	I0827 22:56:55.985647   47307 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0827 22:56:55.985653   47307 command_runner.go:130] > # no_pivot = false
	I0827 22:56:55.985658   47307 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0827 22:56:55.985666   47307 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0827 22:56:55.985671   47307 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0827 22:56:55.985676   47307 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0827 22:56:55.985683   47307 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0827 22:56:55.985689   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0827 22:56:55.985696   47307 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0827 22:56:55.985700   47307 command_runner.go:130] > # Cgroup setting for conmon
	I0827 22:56:55.985708   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0827 22:56:55.985712   47307 command_runner.go:130] > conmon_cgroup = "pod"
	I0827 22:56:55.985718   47307 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0827 22:56:55.985724   47307 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0827 22:56:55.985736   47307 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0827 22:56:55.985742   47307 command_runner.go:130] > conmon_env = [
	I0827 22:56:55.985747   47307 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0827 22:56:55.985753   47307 command_runner.go:130] > ]
	I0827 22:56:55.985757   47307 command_runner.go:130] > # Additional environment variables to set for all the
	I0827 22:56:55.985766   47307 command_runner.go:130] > # containers. These are overridden if set in the
	I0827 22:56:55.985771   47307 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0827 22:56:55.985778   47307 command_runner.go:130] > # default_env = [
	I0827 22:56:55.985781   47307 command_runner.go:130] > # ]
	I0827 22:56:55.985786   47307 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0827 22:56:55.985795   47307 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0827 22:56:55.985800   47307 command_runner.go:130] > # selinux = false
	I0827 22:56:55.985806   47307 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0827 22:56:55.985814   47307 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0827 22:56:55.985819   47307 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0827 22:56:55.985825   47307 command_runner.go:130] > # seccomp_profile = ""
	I0827 22:56:55.985831   47307 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0827 22:56:55.985837   47307 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0827 22:56:55.985842   47307 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0827 22:56:55.985847   47307 command_runner.go:130] > # which might increase security.
	I0827 22:56:55.985851   47307 command_runner.go:130] > # This option is currently deprecated,
	I0827 22:56:55.985858   47307 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0827 22:56:55.985863   47307 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0827 22:56:55.985869   47307 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0827 22:56:55.985879   47307 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0827 22:56:55.985887   47307 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0827 22:56:55.985893   47307 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0827 22:56:55.985900   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.985904   47307 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0827 22:56:55.985912   47307 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0827 22:56:55.985916   47307 command_runner.go:130] > # the cgroup blockio controller.
	I0827 22:56:55.985922   47307 command_runner.go:130] > # blockio_config_file = ""
	I0827 22:56:55.985928   47307 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0827 22:56:55.985934   47307 command_runner.go:130] > # blockio parameters.
	I0827 22:56:55.985938   47307 command_runner.go:130] > # blockio_reload = false
	I0827 22:56:55.985944   47307 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0827 22:56:55.985953   47307 command_runner.go:130] > # irqbalance daemon.
	I0827 22:56:55.985958   47307 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0827 22:56:55.985968   47307 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0827 22:56:55.985975   47307 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0827 22:56:55.985983   47307 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0827 22:56:55.985989   47307 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0827 22:56:55.985997   47307 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0827 22:56:55.986002   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.986006   47307 command_runner.go:130] > # rdt_config_file = ""
	I0827 22:56:55.986010   47307 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0827 22:56:55.986017   47307 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0827 22:56:55.986044   47307 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0827 22:56:55.986051   47307 command_runner.go:130] > # separate_pull_cgroup = ""
	I0827 22:56:55.986057   47307 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0827 22:56:55.986062   47307 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0827 22:56:55.986065   47307 command_runner.go:130] > # will be added.
	I0827 22:56:55.986069   47307 command_runner.go:130] > # default_capabilities = [
	I0827 22:56:55.986073   47307 command_runner.go:130] > # 	"CHOWN",
	I0827 22:56:55.986076   47307 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0827 22:56:55.986080   47307 command_runner.go:130] > # 	"FSETID",
	I0827 22:56:55.986084   47307 command_runner.go:130] > # 	"FOWNER",
	I0827 22:56:55.986087   47307 command_runner.go:130] > # 	"SETGID",
	I0827 22:56:55.986090   47307 command_runner.go:130] > # 	"SETUID",
	I0827 22:56:55.986094   47307 command_runner.go:130] > # 	"SETPCAP",
	I0827 22:56:55.986097   47307 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0827 22:56:55.986101   47307 command_runner.go:130] > # 	"KILL",
	I0827 22:56:55.986104   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986111   47307 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0827 22:56:55.986119   47307 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0827 22:56:55.986124   47307 command_runner.go:130] > # add_inheritable_capabilities = false
	I0827 22:56:55.986131   47307 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0827 22:56:55.986136   47307 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0827 22:56:55.986142   47307 command_runner.go:130] > default_sysctls = [
	I0827 22:56:55.986147   47307 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0827 22:56:55.986150   47307 command_runner.go:130] > ]
	I0827 22:56:55.986154   47307 command_runner.go:130] > # List of devices on the host that a
	I0827 22:56:55.986165   47307 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0827 22:56:55.986171   47307 command_runner.go:130] > # allowed_devices = [
	I0827 22:56:55.986174   47307 command_runner.go:130] > # 	"/dev/fuse",
	I0827 22:56:55.986178   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986182   47307 command_runner.go:130] > # List of additional devices, specified as
	I0827 22:56:55.986191   47307 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0827 22:56:55.986195   47307 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0827 22:56:55.986205   47307 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0827 22:56:55.986214   47307 command_runner.go:130] > # additional_devices = [
	I0827 22:56:55.986217   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986222   47307 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0827 22:56:55.986228   47307 command_runner.go:130] > # cdi_spec_dirs = [
	I0827 22:56:55.986231   47307 command_runner.go:130] > # 	"/etc/cdi",
	I0827 22:56:55.986235   47307 command_runner.go:130] > # 	"/var/run/cdi",
	I0827 22:56:55.986241   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986246   47307 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0827 22:56:55.986254   47307 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0827 22:56:55.986258   47307 command_runner.go:130] > # Defaults to false.
	I0827 22:56:55.986262   47307 command_runner.go:130] > # device_ownership_from_security_context = false
	I0827 22:56:55.986269   47307 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0827 22:56:55.986274   47307 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0827 22:56:55.986278   47307 command_runner.go:130] > # hooks_dir = [
	I0827 22:56:55.986283   47307 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0827 22:56:55.986288   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986294   47307 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0827 22:56:55.986302   47307 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0827 22:56:55.986306   47307 command_runner.go:130] > # its default mounts from the following two files:
	I0827 22:56:55.986309   47307 command_runner.go:130] > #
	I0827 22:56:55.986315   47307 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0827 22:56:55.986323   47307 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0827 22:56:55.986328   47307 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0827 22:56:55.986333   47307 command_runner.go:130] > #
	I0827 22:56:55.986338   47307 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0827 22:56:55.986345   47307 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0827 22:56:55.986351   47307 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0827 22:56:55.986358   47307 command_runner.go:130] > #      only add mounts it finds in this file.
	I0827 22:56:55.986365   47307 command_runner.go:130] > #
	I0827 22:56:55.986371   47307 command_runner.go:130] > # default_mounts_file = ""
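	As an aside on the /SRC:/DST format described in the comments above: a custom mounts file referenced by default_mounts_file simply contains one mount per line. The entries below are a hedged, hypothetical sketch and are not taken from this test run:
	
	  /usr/share/zoneinfo:/usr/share/zoneinfo
	  /etc/pki/ca-trust:/etc/pki/ca-trust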
	I0827 22:56:55.986376   47307 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0827 22:56:55.986385   47307 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0827 22:56:55.986388   47307 command_runner.go:130] > pids_limit = 1024
	I0827 22:56:55.986394   47307 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0827 22:56:55.986400   47307 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0827 22:56:55.986406   47307 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0827 22:56:55.986416   47307 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0827 22:56:55.986420   47307 command_runner.go:130] > # log_size_max = -1
	I0827 22:56:55.986426   47307 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0827 22:56:55.986434   47307 command_runner.go:130] > # log_to_journald = false
	I0827 22:56:55.986440   47307 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0827 22:56:55.986444   47307 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0827 22:56:55.986451   47307 command_runner.go:130] > # Path to directory for container attach sockets.
	I0827 22:56:55.986456   47307 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0827 22:56:55.986463   47307 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0827 22:56:55.986466   47307 command_runner.go:130] > # bind_mount_prefix = ""
	I0827 22:56:55.986472   47307 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0827 22:56:55.986475   47307 command_runner.go:130] > # read_only = false
	I0827 22:56:55.986481   47307 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0827 22:56:55.986489   47307 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0827 22:56:55.986493   47307 command_runner.go:130] > # live configuration reload.
	I0827 22:56:55.986498   47307 command_runner.go:130] > # log_level = "info"
	I0827 22:56:55.986503   47307 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0827 22:56:55.986510   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.986513   47307 command_runner.go:130] > # log_filter = ""
	I0827 22:56:55.986521   47307 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0827 22:56:55.986527   47307 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0827 22:56:55.986532   47307 command_runner.go:130] > # separated by comma.
	I0827 22:56:55.986541   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986547   47307 command_runner.go:130] > # uid_mappings = ""
	I0827 22:56:55.986553   47307 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0827 22:56:55.986561   47307 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0827 22:56:55.986565   47307 command_runner.go:130] > # separated by comma.
	I0827 22:56:55.986573   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986582   47307 command_runner.go:130] > # gid_mappings = ""
	I0827 22:56:55.986590   47307 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0827 22:56:55.986595   47307 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0827 22:56:55.986603   47307 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0827 22:56:55.986613   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986619   47307 command_runner.go:130] > # minimum_mappable_uid = -1
	I0827 22:56:55.986624   47307 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0827 22:56:55.986632   47307 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0827 22:56:55.986638   47307 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0827 22:56:55.986646   47307 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0827 22:56:55.986654   47307 command_runner.go:130] > # minimum_mappable_gid = -1
	I0827 22:56:55.986660   47307 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0827 22:56:55.986668   47307 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0827 22:56:55.986673   47307 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0827 22:56:55.986679   47307 command_runner.go:130] > # ctr_stop_timeout = 30
	I0827 22:56:55.986684   47307 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0827 22:56:55.986693   47307 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0827 22:56:55.986697   47307 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0827 22:56:55.986704   47307 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0827 22:56:55.986708   47307 command_runner.go:130] > drop_infra_ctr = false
	I0827 22:56:55.986713   47307 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0827 22:56:55.986719   47307 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0827 22:56:55.986726   47307 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0827 22:56:55.986736   47307 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0827 22:56:55.986745   47307 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0827 22:56:55.986755   47307 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0827 22:56:55.986762   47307 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0827 22:56:55.986768   47307 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0827 22:56:55.986772   47307 command_runner.go:130] > # shared_cpuset = ""
	I0827 22:56:55.986777   47307 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0827 22:56:55.986784   47307 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0827 22:56:55.986788   47307 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0827 22:56:55.986797   47307 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0827 22:56:55.986800   47307 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0827 22:56:55.986806   47307 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0827 22:56:55.986814   47307 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0827 22:56:55.986822   47307 command_runner.go:130] > # enable_criu_support = false
	I0827 22:56:55.986829   47307 command_runner.go:130] > # Enable/disable the generation of the container and
	I0827 22:56:55.986835   47307 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0827 22:56:55.986841   47307 command_runner.go:130] > # enable_pod_events = false
	I0827 22:56:55.986846   47307 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0827 22:56:55.986859   47307 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0827 22:56:55.986863   47307 command_runner.go:130] > # default_runtime = "runc"
	I0827 22:56:55.986870   47307 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0827 22:56:55.986877   47307 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0827 22:56:55.986887   47307 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0827 22:56:55.986896   47307 command_runner.go:130] > # creation as a file is not desired either.
	I0827 22:56:55.986903   47307 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0827 22:56:55.986911   47307 command_runner.go:130] > # the hostname is being managed dynamically.
	I0827 22:56:55.986915   47307 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0827 22:56:55.986921   47307 command_runner.go:130] > # ]
	I0827 22:56:55.986928   47307 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0827 22:56:55.986937   47307 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0827 22:56:55.986943   47307 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0827 22:56:55.986950   47307 command_runner.go:130] > # Each entry in the table should follow the format:
	I0827 22:56:55.986953   47307 command_runner.go:130] > #
	I0827 22:56:55.986959   47307 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0827 22:56:55.986964   47307 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0827 22:56:55.987007   47307 command_runner.go:130] > # runtime_type = "oci"
	I0827 22:56:55.987014   47307 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0827 22:56:55.987018   47307 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0827 22:56:55.987025   47307 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0827 22:56:55.987029   47307 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0827 22:56:55.987033   47307 command_runner.go:130] > # monitor_env = []
	I0827 22:56:55.987037   47307 command_runner.go:130] > # privileged_without_host_devices = false
	I0827 22:56:55.987041   47307 command_runner.go:130] > # allowed_annotations = []
	I0827 22:56:55.987047   47307 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0827 22:56:55.987050   47307 command_runner.go:130] > # Where:
	I0827 22:56:55.987055   47307 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0827 22:56:55.987063   47307 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0827 22:56:55.987069   47307 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0827 22:56:55.987089   47307 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0827 22:56:55.987095   47307 command_runner.go:130] > #   in $PATH.
	I0827 22:56:55.987101   47307 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0827 22:56:55.987105   47307 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0827 22:56:55.987113   47307 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0827 22:56:55.987117   47307 command_runner.go:130] > #   state.
	I0827 22:56:55.987123   47307 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0827 22:56:55.987128   47307 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0827 22:56:55.987134   47307 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0827 22:56:55.987141   47307 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0827 22:56:55.987147   47307 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0827 22:56:55.987155   47307 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0827 22:56:55.987162   47307 command_runner.go:130] > #   The currently recognized values are:
	I0827 22:56:55.987170   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0827 22:56:55.987177   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0827 22:56:55.987185   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0827 22:56:55.987190   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0827 22:56:55.987197   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0827 22:56:55.987205   47307 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0827 22:56:55.987211   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0827 22:56:55.987219   47307 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0827 22:56:55.987225   47307 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0827 22:56:55.987233   47307 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0827 22:56:55.987237   47307 command_runner.go:130] > #   deprecated option "conmon".
	I0827 22:56:55.987245   47307 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0827 22:56:55.987250   47307 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0827 22:56:55.987260   47307 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0827 22:56:55.987264   47307 command_runner.go:130] > #   should be moved to the container's cgroup
	I0827 22:56:55.987270   47307 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0827 22:56:55.987276   47307 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0827 22:56:55.987281   47307 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0827 22:56:55.987286   47307 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0827 22:56:55.987291   47307 command_runner.go:130] > #
	I0827 22:56:55.987300   47307 command_runner.go:130] > # Using the seccomp notifier feature:
	I0827 22:56:55.987303   47307 command_runner.go:130] > #
	I0827 22:56:55.987308   47307 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0827 22:56:55.987317   47307 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0827 22:56:55.987320   47307 command_runner.go:130] > #
	I0827 22:56:55.987326   47307 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0827 22:56:55.987331   47307 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0827 22:56:55.987334   47307 command_runner.go:130] > #
	I0827 22:56:55.987340   47307 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0827 22:56:55.987343   47307 command_runner.go:130] > # feature.
	I0827 22:56:55.987345   47307 command_runner.go:130] > #
	I0827 22:56:55.987351   47307 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0827 22:56:55.987356   47307 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0827 22:56:55.987361   47307 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0827 22:56:55.987368   47307 command_runner.go:130] > # a blocked syscall and will then terminate the workload after a timeout of 5
	I0827 22:56:55.987374   47307 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0827 22:56:55.987377   47307 command_runner.go:130] > #
	I0827 22:56:55.987382   47307 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0827 22:56:55.987387   47307 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0827 22:56:55.987389   47307 command_runner.go:130] > #
	I0827 22:56:55.987395   47307 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0827 22:56:55.987400   47307 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0827 22:56:55.987403   47307 command_runner.go:130] > #
	I0827 22:56:55.987409   47307 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0827 22:56:55.987414   47307 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0827 22:56:55.987417   47307 command_runner.go:130] > # limitation.
	I0827 22:56:55.987422   47307 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0827 22:56:55.987425   47307 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0827 22:56:55.987429   47307 command_runner.go:130] > runtime_type = "oci"
	I0827 22:56:55.987433   47307 command_runner.go:130] > runtime_root = "/run/runc"
	I0827 22:56:55.987437   47307 command_runner.go:130] > runtime_config_path = ""
	I0827 22:56:55.987444   47307 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0827 22:56:55.987448   47307 command_runner.go:130] > monitor_cgroup = "pod"
	I0827 22:56:55.987452   47307 command_runner.go:130] > monitor_exec_cgroup = ""
	I0827 22:56:55.987456   47307 command_runner.go:130] > monitor_env = [
	I0827 22:56:55.987461   47307 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0827 22:56:55.987466   47307 command_runner.go:130] > ]
	I0827 22:56:55.987470   47307 command_runner.go:130] > privileged_without_host_devices = false
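	To make the handler format documented above concrete, here is a minimal, hedged sketch of what an additional (commented-out) handler entry could look like. The "crun" name, binary path and annotation list are illustrative assumptions and are not part of the configuration dumped in this run; listing "io.kubernetes.cri-o.seccompNotifierAction" under allowed_annotations is what the seccomp notifier section above requires.
	
	# [crio.runtime.runtimes.crun]
	# runtime_path = "/usr/bin/crun"
	# runtime_type = "oci"
	# runtime_root = "/run/crun"
	# monitor_path = "/usr/libexec/crio/conmon"
	# monitor_cgroup = "pod"
	# monitor_env = [
	# 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	# ]
	# allowed_annotations = [
	# 	"io.kubernetes.cri-o.seccompNotifierAction",
	# 	"io.kubernetes.cri-o.Devices",
	# ]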
	I0827 22:56:55.987478   47307 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0827 22:56:55.987487   47307 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0827 22:56:55.987495   47307 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0827 22:56:55.987502   47307 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0827 22:56:55.987509   47307 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0827 22:56:55.987516   47307 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0827 22:56:55.987524   47307 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0827 22:56:55.987534   47307 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0827 22:56:55.987539   47307 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0827 22:56:55.987546   47307 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0827 22:56:55.987549   47307 command_runner.go:130] > # Example:
	I0827 22:56:55.987553   47307 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0827 22:56:55.987557   47307 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0827 22:56:55.987564   47307 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0827 22:56:55.987568   47307 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0827 22:56:55.987572   47307 command_runner.go:130] > # cpuset = 0
	I0827 22:56:55.987575   47307 command_runner.go:130] > # cpushares = "0-1"
	I0827 22:56:55.987578   47307 command_runner.go:130] > # Where:
	I0827 22:56:55.987582   47307 command_runner.go:130] > # The workload name is workload-type.
	I0827 22:56:55.987588   47307 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0827 22:56:55.987597   47307 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0827 22:56:55.987602   47307 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0827 22:56:55.987612   47307 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0827 22:56:55.987617   47307 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
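	Putting the workload description above together, a complete (commented-out) stanza could look like the following sketch. The "throttled" name and the values are illustrative assumptions, and the value types shown (a numeric share count, a CPU list string) are likewise assumptions about how cpu shares and cpusets are normally expressed; treat this purely as an illustration, not a verbatim copy of the shipped defaults.
	
	# [crio.runtime.workloads.throttled]
	# activation_annotation = "io.crio/throttled"
	# annotation_prefix = "io.crio.throttled"
	# [crio.runtime.workloads.throttled.resources]
	# cpushares = 512
	# cpuset = "0-1"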
	I0827 22:56:55.987621   47307 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0827 22:56:55.987627   47307 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0827 22:56:55.987631   47307 command_runner.go:130] > # Default value is set to true
	I0827 22:56:55.987635   47307 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0827 22:56:55.987640   47307 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0827 22:56:55.987644   47307 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0827 22:56:55.987648   47307 command_runner.go:130] > # Default value is set to 'false'
	I0827 22:56:55.987652   47307 command_runner.go:130] > # disable_hostport_mapping = false
	I0827 22:56:55.987658   47307 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0827 22:56:55.987662   47307 command_runner.go:130] > #
	I0827 22:56:55.987666   47307 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0827 22:56:55.987672   47307 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0827 22:56:55.987678   47307 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0827 22:56:55.987687   47307 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0827 22:56:55.987692   47307 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0827 22:56:55.987696   47307 command_runner.go:130] > [crio.image]
	I0827 22:56:55.987701   47307 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0827 22:56:55.987705   47307 command_runner.go:130] > # default_transport = "docker://"
	I0827 22:56:55.987710   47307 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0827 22:56:55.987716   47307 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0827 22:56:55.987721   47307 command_runner.go:130] > # global_auth_file = ""
	I0827 22:56:55.987726   47307 command_runner.go:130] > # The image used to instantiate infra containers.
	I0827 22:56:55.987730   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.987735   47307 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0827 22:56:55.987741   47307 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0827 22:56:55.987746   47307 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0827 22:56:55.987753   47307 command_runner.go:130] > # This option supports live configuration reload.
	I0827 22:56:55.987759   47307 command_runner.go:130] > # pause_image_auth_file = ""
	I0827 22:56:55.987767   47307 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0827 22:56:55.987772   47307 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0827 22:56:55.987780   47307 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0827 22:56:55.987786   47307 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0827 22:56:55.987792   47307 command_runner.go:130] > # pause_command = "/pause"
	I0827 22:56:55.987799   47307 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0827 22:56:55.987807   47307 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0827 22:56:55.987812   47307 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0827 22:56:55.987821   47307 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0827 22:56:55.987826   47307 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0827 22:56:55.987834   47307 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0827 22:56:55.987840   47307 command_runner.go:130] > # pinned_images = [
	I0827 22:56:55.987843   47307 command_runner.go:130] > # ]
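	For reference, the exact/glob/keyword patterns described above could be exercised with entries such as the following. The image names are hypothetical illustrations (only the pause image appears in this run's configuration), and the list is shown commented out, like the rest of the defaults:
	
	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10",
	# 	"registry.k8s.io/kube-apiserver*",
	# 	"*coredns*",
	# ]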
	I0827 22:56:55.987849   47307 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0827 22:56:55.987857   47307 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0827 22:56:55.987862   47307 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0827 22:56:55.987870   47307 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0827 22:56:55.987875   47307 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0827 22:56:55.987880   47307 command_runner.go:130] > # signature_policy = ""
	I0827 22:56:55.987885   47307 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0827 22:56:55.987893   47307 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0827 22:56:55.987903   47307 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0827 22:56:55.987912   47307 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0827 22:56:55.987918   47307 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0827 22:56:55.987922   47307 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0827 22:56:55.987928   47307 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0827 22:56:55.987936   47307 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0827 22:56:55.987940   47307 command_runner.go:130] > # changing them here.
	I0827 22:56:55.987946   47307 command_runner.go:130] > # insecure_registries = [
	I0827 22:56:55.987949   47307 command_runner.go:130] > # ]
	I0827 22:56:55.987955   47307 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0827 22:56:55.987962   47307 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0827 22:56:55.987966   47307 command_runner.go:130] > # image_volumes = "mkdir"
	I0827 22:56:55.987973   47307 command_runner.go:130] > # Temporary directory to use for storing big files
	I0827 22:56:55.987977   47307 command_runner.go:130] > # big_files_temporary_dir = ""
	I0827 22:56:55.987988   47307 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0827 22:56:55.987994   47307 command_runner.go:130] > # CNI plugins.
	I0827 22:56:55.987997   47307 command_runner.go:130] > [crio.network]
	I0827 22:56:55.988002   47307 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0827 22:56:55.988007   47307 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0827 22:56:55.988013   47307 command_runner.go:130] > # cni_default_network = ""
	I0827 22:56:55.988018   47307 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0827 22:56:55.988024   47307 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0827 22:56:55.988029   47307 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0827 22:56:55.988035   47307 command_runner.go:130] > # plugin_dirs = [
	I0827 22:56:55.988038   47307 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0827 22:56:55.988041   47307 command_runner.go:130] > # ]
	I0827 22:56:55.988047   47307 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0827 22:56:55.988052   47307 command_runner.go:130] > [crio.metrics]
	I0827 22:56:55.988057   47307 command_runner.go:130] > # Globally enable or disable metrics support.
	I0827 22:56:55.988063   47307 command_runner.go:130] > enable_metrics = true
	I0827 22:56:55.988067   47307 command_runner.go:130] > # Specify enabled metrics collectors.
	I0827 22:56:55.988071   47307 command_runner.go:130] > # Per default all metrics are enabled.
	I0827 22:56:55.988079   47307 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0827 22:56:55.988084   47307 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0827 22:56:55.988090   47307 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0827 22:56:55.988095   47307 command_runner.go:130] > # metrics_collectors = [
	I0827 22:56:55.988103   47307 command_runner.go:130] > # 	"operations",
	I0827 22:56:55.988109   47307 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0827 22:56:55.988113   47307 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0827 22:56:55.988120   47307 command_runner.go:130] > # 	"operations_errors",
	I0827 22:56:55.988124   47307 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0827 22:56:55.988128   47307 command_runner.go:130] > # 	"image_pulls_by_name",
	I0827 22:56:55.988132   47307 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0827 22:56:55.988139   47307 command_runner.go:130] > # 	"image_pulls_failures",
	I0827 22:56:55.988143   47307 command_runner.go:130] > # 	"image_pulls_successes",
	I0827 22:56:55.988149   47307 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0827 22:56:55.988153   47307 command_runner.go:130] > # 	"image_layer_reuse",
	I0827 22:56:55.988157   47307 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0827 22:56:55.988163   47307 command_runner.go:130] > # 	"containers_oom_total",
	I0827 22:56:55.988166   47307 command_runner.go:130] > # 	"containers_oom",
	I0827 22:56:55.988170   47307 command_runner.go:130] > # 	"processes_defunct",
	I0827 22:56:55.988174   47307 command_runner.go:130] > # 	"operations_total",
	I0827 22:56:55.988178   47307 command_runner.go:130] > # 	"operations_latency_seconds",
	I0827 22:56:55.988182   47307 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0827 22:56:55.988187   47307 command_runner.go:130] > # 	"operations_errors_total",
	I0827 22:56:55.988191   47307 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0827 22:56:55.988195   47307 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0827 22:56:55.988199   47307 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0827 22:56:55.988205   47307 command_runner.go:130] > # 	"image_pulls_success_total",
	I0827 22:56:55.988215   47307 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0827 22:56:55.988222   47307 command_runner.go:130] > # 	"containers_oom_count_total",
	I0827 22:56:55.988231   47307 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0827 22:56:55.988238   47307 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0827 22:56:55.988241   47307 command_runner.go:130] > # ]
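	Given the prefix equivalence described above ("operations" is treated the same as "crio_operations" and "container_runtime_crio_operations"), a trimmed collector list could mix the bare and prefixed spellings. This is a hedged illustration only, not a setting used in this run:
	
	# metrics_collectors = [
	# 	"operations",
	# 	"crio_operations_errors_total",
	# 	"container_runtime_crio_image_pulls_bytes_total",
	# ]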
	I0827 22:56:55.988245   47307 command_runner.go:130] > # The port on which the metrics server will listen.
	I0827 22:56:55.988251   47307 command_runner.go:130] > # metrics_port = 9090
	I0827 22:56:55.988256   47307 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0827 22:56:55.988260   47307 command_runner.go:130] > # metrics_socket = ""
	I0827 22:56:55.988265   47307 command_runner.go:130] > # The certificate for the secure metrics server.
	I0827 22:56:55.988273   47307 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0827 22:56:55.988279   47307 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0827 22:56:55.988285   47307 command_runner.go:130] > # certificate on any modification event.
	I0827 22:56:55.988294   47307 command_runner.go:130] > # metrics_cert = ""
	I0827 22:56:55.988301   47307 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0827 22:56:55.988306   47307 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0827 22:56:55.988312   47307 command_runner.go:130] > # metrics_key = ""
	I0827 22:56:55.988317   47307 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0827 22:56:55.988323   47307 command_runner.go:130] > [crio.tracing]
	I0827 22:56:55.988328   47307 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0827 22:56:55.988331   47307 command_runner.go:130] > # enable_tracing = false
	I0827 22:56:55.988339   47307 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0827 22:56:55.988343   47307 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0827 22:56:55.988352   47307 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0827 22:56:55.988358   47307 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0827 22:56:55.988362   47307 command_runner.go:130] > # CRI-O NRI configuration.
	I0827 22:56:55.988367   47307 command_runner.go:130] > [crio.nri]
	I0827 22:56:55.988371   47307 command_runner.go:130] > # Globally enable or disable NRI.
	I0827 22:56:55.988376   47307 command_runner.go:130] > # enable_nri = false
	I0827 22:56:55.988380   47307 command_runner.go:130] > # NRI socket to listen on.
	I0827 22:56:55.988386   47307 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0827 22:56:55.988390   47307 command_runner.go:130] > # NRI plugin directory to use.
	I0827 22:56:55.988397   47307 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0827 22:56:55.988401   47307 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0827 22:56:55.988407   47307 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0827 22:56:55.988412   47307 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0827 22:56:55.988417   47307 command_runner.go:130] > # nri_disable_connections = false
	I0827 22:56:55.988422   47307 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0827 22:56:55.988429   47307 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0827 22:56:55.988433   47307 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0827 22:56:55.988440   47307 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0827 22:56:55.988445   47307 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0827 22:56:55.988451   47307 command_runner.go:130] > [crio.stats]
	I0827 22:56:55.988458   47307 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0827 22:56:55.988476   47307 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0827 22:56:55.988482   47307 command_runner.go:130] > # stats_collection_period = 0
	I0827 22:56:55.988628   47307 cni.go:84] Creating CNI manager for ""
	I0827 22:56:55.988641   47307 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0827 22:56:55.988649   47307 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:56:55.988677   47307 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-465478 NodeName:multinode-465478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:56:55.988801   47307 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-465478"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 22:56:55.988858   47307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:56:55.998657   47307 command_runner.go:130] > kubeadm
	I0827 22:56:55.998678   47307 command_runner.go:130] > kubectl
	I0827 22:56:55.998682   47307 command_runner.go:130] > kubelet
	I0827 22:56:55.998705   47307 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:56:55.998770   47307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 22:56:56.007867   47307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0827 22:56:56.023603   47307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:56:56.038714   47307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0827 22:56:56.054288   47307 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0827 22:56:56.057680   47307 command_runner.go:130] > 192.168.39.203	control-plane.minikube.internal
	I0827 22:56:56.057751   47307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:56:56.198826   47307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:56:56.213630   47307 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478 for IP: 192.168.39.203
	I0827 22:56:56.213655   47307 certs.go:194] generating shared ca certs ...
	I0827 22:56:56.213670   47307 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:56:56.213840   47307 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 22:56:56.213884   47307 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 22:56:56.213894   47307 certs.go:256] generating profile certs ...
	I0827 22:56:56.213977   47307 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/client.key
	I0827 22:56:56.214029   47307 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key.be360bcd
	I0827 22:56:56.214066   47307 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key
	I0827 22:56:56.214076   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0827 22:56:56.214088   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0827 22:56:56.214100   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0827 22:56:56.214112   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0827 22:56:56.214128   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0827 22:56:56.214141   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0827 22:56:56.214153   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0827 22:56:56.214165   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0827 22:56:56.214214   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 22:56:56.214241   47307 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 22:56:56.214248   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 22:56:56.214266   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 22:56:56.214289   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:56:56.214315   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 22:56:56.214357   47307 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 22:56:56.214399   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem -> /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.214412   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.214424   47307 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.215090   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:56:56.237665   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:56:56.259556   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:56:56.280825   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 22:56:56.302229   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0827 22:56:56.324167   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 22:56:56.345744   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:56:56.366862   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/multinode-465478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 22:56:56.388515   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 22:56:56.409320   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 22:56:56.430632   47307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:56:56.452026   47307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:56:56.466803   47307 ssh_runner.go:195] Run: openssl version
	I0827 22:56:56.472084   47307 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0827 22:56:56.472236   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 22:56:56.482210   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486231   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486257   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.486292   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 22:56:56.491349   47307 command_runner.go:130] > 3ec20f2e
	I0827 22:56:56.491410   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 22:56:56.500021   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:56:56.509768   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.513640   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.513957   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.514007   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:56:56.519169   47307 command_runner.go:130] > b5213941
	I0827 22:56:56.519235   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:56:56.528679   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 22:56:56.539040   47307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543209   47307 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543240   47307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.543276   47307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 22:56:56.548579   47307 command_runner.go:130] > 51391683
	I0827 22:56:56.548648   47307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
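The three certificates above (147652.pem, minikubeCA.pem, 14765.pem) are each installed with the same hash-and-symlink pattern. A minimal, generalised sketch of that pattern, assuming the PEM files already sit in /usr/share/ca-certificates (the loop and variable names are illustrative):

	for pem in /usr/share/ca-certificates/*.pem; do
	  name=$(basename "$pem")
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  # Link the cert into /etc/ssl/certs and add the <subject-hash>.0 alias
	  # that OpenSSL uses to look up CA certificates
	  sudo ln -fs "$pem" "/etc/ssl/certs/$name"
	  sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/${hash}.0"
	done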
	I0827 22:56:56.558263   47307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:56:56.562314   47307 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:56:56.562343   47307 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0827 22:56:56.562352   47307 command_runner.go:130] > Device: 253,1	Inode: 6291478     Links: 1
	I0827 22:56:56.562360   47307 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0827 22:56:56.562376   47307 command_runner.go:130] > Access: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562390   47307 command_runner.go:130] > Modify: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562400   47307 command_runner.go:130] > Change: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562408   47307 command_runner.go:130] >  Birth: 2024-08-27 22:50:03.833226121 +0000
	I0827 22:56:56.562458   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 22:56:56.567556   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.567662   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 22:56:56.572773   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.572907   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 22:56:56.578092   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.578229   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 22:56:56.583433   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.583485   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 22:56:56.588365   47307 command_runner.go:130] > Certificate will not expire
	I0827 22:56:56.588586   47307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 22:56:56.593660   47307 command_runner.go:130] > Certificate will not expire
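Each "Certificate will not expire" line above is the output of an openssl -checkend probe. A minimal sketch of the same 24-hour check over a few of the control-plane certificates, assuming the /var/lib/minikube/certs layout shown in this log (the file list is illustrative, not exhaustive):

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  # -checkend 86400 exits non-zero if the certificate expires within 24 hours
	  openssl x509 -noout -in "$crt" -checkend 86400 \
	    && echo "$crt: ok" || echo "$crt: expiring within 24h"
	done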
	I0827 22:56:56.593723   47307 kubeadm.go:392] StartCluster: {Name:multinode-465478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-465478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:56:56.593829   47307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 22:56:56.593898   47307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 22:56:56.627391   47307 command_runner.go:130] > 015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908
	I0827 22:56:56.627415   47307 command_runner.go:130] > ef8842da2a1926e369837bbfae1b7e10bb02da45e379e84d93b0cbe06f7e7855
	I0827 22:56:56.627421   47307 command_runner.go:130] > d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544
	I0827 22:56:56.627427   47307 command_runner.go:130] > 7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519
	I0827 22:56:56.627432   47307 command_runner.go:130] > 827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499
	I0827 22:56:56.627437   47307 command_runner.go:130] > 2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6
	I0827 22:56:56.627445   47307 command_runner.go:130] > ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec
	I0827 22:56:56.627453   47307 command_runner.go:130] > 30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1
	I0827 22:56:56.627490   47307 command_runner.go:130] > 855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3
	I0827 22:56:56.628871   47307 cri.go:89] found id: "015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908"
	I0827 22:56:56.628892   47307 cri.go:89] found id: "ef8842da2a1926e369837bbfae1b7e10bb02da45e379e84d93b0cbe06f7e7855"
	I0827 22:56:56.628898   47307 cri.go:89] found id: "d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544"
	I0827 22:56:56.628902   47307 cri.go:89] found id: "7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519"
	I0827 22:56:56.628907   47307 cri.go:89] found id: "827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499"
	I0827 22:56:56.628914   47307 cri.go:89] found id: "2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6"
	I0827 22:56:56.628921   47307 cri.go:89] found id: "ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec"
	I0827 22:56:56.628926   47307 cri.go:89] found id: "30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1"
	I0827 22:56:56.628933   47307 cri.go:89] found id: "855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3"
	I0827 22:56:56.628941   47307 cri.go:89] found id: ""
	I0827 22:56:56.628987   47307 ssh_runner.go:195] Run: sudo runc list -f json
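The container IDs found above can be resolved back to pod and container names with crictl; a minimal sketch, assuming crictl is configured against the CRI-O socket used throughout this log:

	# List all kube-system containers (running and exited) with names and states
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# Inspect one of the IDs reported above in full detail
	sudo crictl inspect 015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908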
	
	
	==> CRI-O <==
	Aug 27 23:01:05 multinode-465478 crio[2838]: time="2024-08-27 23:01:05.987613990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799665987589290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff02ef85-da13-42f1-915f-53481c2a01e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:05 multinode-465478 crio[2838]: time="2024-08-27 23:01:05.988275029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=487bcc76-2891-4bc5-a0b4-df74c6c4f201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:05 multinode-465478 crio[2838]: time="2024-08-27 23:01:05.988333016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=487bcc76-2891-4bc5-a0b4-df74c6c4f201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:05 multinode-465478 crio[2838]: time="2024-08-27 23:01:05.988681662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=487bcc76-2891-4bc5-a0b4-df74c6c4f201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.032747195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67388547-02ab-4989-a2ad-ee9380812037 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.032830573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67388547-02ab-4989-a2ad-ee9380812037 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.034429000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1a64ec1-e6e9-4864-a01f-f76bd18fc204 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.035052487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799666035017813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1a64ec1-e6e9-4864-a01f-f76bd18fc204 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.036173197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be337d0f-bec6-4222-a16b-9481226e9472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.036351731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be337d0f-bec6-4222-a16b-9481226e9472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.036922870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be337d0f-bec6-4222-a16b-9481226e9472 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.087568592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ec07e43-a3a1-4068-8814-1ee37f1a57d7 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.087656698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ec07e43-a3a1-4068-8814-1ee37f1a57d7 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.089636255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685d275f-0eb3-483c-8112-09754cb01c51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.090079389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799666090053175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685d275f-0eb3-483c-8112-09754cb01c51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.090720060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0b60a4a-db7d-4e3b-920f-7c58d4d15756 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.090797242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0b60a4a-db7d-4e3b-920f-7c58d4d15756 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.091587770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0b60a4a-db7d-4e3b-920f-7c58d4d15756 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.139938414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8aacd258-1533-42e0-bebe-eefb30214165 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.140033704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8aacd258-1533-42e0-bebe-eefb30214165 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.141311079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56d7b7a5-5c8a-4aa9-a4ec-c191f27941d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.141740388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799666141716059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56d7b7a5-5c8a-4aa9-a4ec-c191f27941d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.142157940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3615ba96-ae2d-4270-a2e2-d685fb05d5a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.142214310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3615ba96-ae2d-4270-a2e2-d685fb05d5a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:01:06 multinode-465478 crio[2838]: time="2024-08-27 23:01:06.142701528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03f13a933ea01b48c9975c987081e4a2c4b5eeda7fa2dfdb2de697adf252c11a,PodSandboxId:9b275278e9ebfad8fe285d9762956c3edd8d73b081fb5b98536b598a308e2098,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724799456436950368,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065,PodSandboxId:50e97919d8e1ca38cb6a0f589abd1adb6c5a078b53ded2e757a7be7fe9607d86,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724799422944487668,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef,PodSandboxId:06812a3f6bedc6ec75d5ce355930a5eff4ecbd6ba9872fe65b439dbca558976b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724799422748321357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d230ca4fe79e8a747bf7e9c1975d82bac1b5108d0b75174ddb3cbaf27b72e43e,PodSandboxId:17b1de931c58e16a733a7c9396e0c9860c1b504639fc590b329e6a8ce2435a3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724799422768958066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2,PodSandboxId:cf93392a97b6773acd503a02a73a1891a75c242374f44a1c19a9d901b6de05d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724799422697736724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb,PodSandboxId:25ed24d2b4d3a6652fef656e31354666ab445421bc48de7b8dad3cbb5ca9341f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724799418904472162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e,PodSandboxId:2dcac5c4436bf0af23cc5454404df3879144ad73fac931a7ad8fe0d7daa0da58,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724799418889661013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]string{io.kubernetes.container.hash: f72d
0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa,PodSandboxId:3d9d1aadd56403c35e427aca22fcb6ac0433cf32af795cd346e6db918e40623e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724799418886009125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05,PodSandboxId:e52dd627b3b14833e306e38a9a003d448664e494463225c4a5185b714860ac0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724799418816579778,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908,PodSandboxId:6ff886066d4e82fa107073d12cc6fce58c030c3e4b528e55262b8f20bcd27116,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724799405336472330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gj4hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40942699-6970-4dc9-baa9-e0c87617b85b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abacf11082d9b8f2958ed11ff93149faf23bfe8011d19bc6b9325eed59cab29f,PodSandboxId:51d1305a97dcb6450ba2ac785e9090e81b4ae1e945686118cac9cef079dcf3fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724799085697971420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j67n7,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb273b92-155a-4a8a-9f93-3474e86b1e51,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1aeddd6a3284d1f551ca491ba90e7dd057c3fba81559f99838f8616224be544,PodSandboxId:c087196811cffda6023ecdb6e49d319397a6a378810246c8fe81c2939b89f61c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724799033980626634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: cb28a488-a7a9-4b84-b592-8380454c672f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519,PodSandboxId:e1491b885f8dbb20a5a2ab641f78a5777e187703ddcbbb9dd398664eac0423fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724799022083037610,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rljzm,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a34abf39-3421-48f1-bbe8-5ffe6a0d9c1a,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499,PodSandboxId:97c971208b462e1bb49bf91051e7ac5e45b9d5693f1e2e4cc1af5327994c325d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724799018545517275,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dc2v7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2d3f49d3-14e1-4b81-9111-88f3d31e51b2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6,PodSandboxId:584d32decdff1a400b813ff1909c15ba8409caedefe7922037d74e85bf36a64f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724799007814693407,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b
f9a1a728342ab6f79f2904a02be377,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec,PodSandboxId:25aa276b34f1f854fd62b250c535e347fabcef0ad4bb01331188826c34fc8030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724799007792322223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79eb6ba02ed60a5e3eb7960054775086,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1,PodSandboxId:6055c78681fcc8bcd62414028fafa21b197112b8ffba226390b307b819f75edb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724799007742536399,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2411c2ba8586cba3718ed73a66e07bff,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3,PodSandboxId:8279eef7e32f949b37b5df5376de05e30e6d5187d5456245a721d85ce5f624b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724799007701823615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-465478,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1539cde0a0a36c54879d186621a2cc1,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3615ba96-ae2d-4270-a2e2-d685fb05d5a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03f13a933ea01       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9b275278e9ebf       busybox-7dff88458-j67n7
	f96a3e43f2516       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   50e97919d8e1c       kindnet-rljzm
	d230ca4fe79e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   17b1de931c58e       storage-provisioner
	32f1f5a9e23ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   06812a3f6bedc       coredns-6f6b679f8f-gj4hr
	de730b11023bb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   cf93392a97b67       kube-proxy-dc2v7
	3e6dcde1fec1d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   25ed24d2b4d3a       kube-scheduler-multinode-465478
	0a1c64bb0ada0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   2dcac5c4436bf       kube-apiserver-multinode-465478
	1c5241296cf9c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   3d9d1aadd5640       etcd-multinode-465478
	d2654e5707701       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   e52dd627b3b14       kube-controller-manager-multinode-465478
	015744245af62       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   6ff886066d4e8       coredns-6f6b679f8f-gj4hr
	abacf11082d9b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   51d1305a97dcb       busybox-7dff88458-j67n7
	d1aeddd6a3284       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c087196811cff       storage-provisioner
	7bd5a7a1c1c9f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   e1491b885f8db       kindnet-rljzm
	827bc3f7e5631       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   97c971208b462       kube-proxy-dc2v7
	2597b46782de6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   584d32decdff1       kube-scheduler-multinode-465478
	ab19f142adda1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   25aa276b34f1f       etcd-multinode-465478
	30e40e98f2f39       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   6055c78681fcc       kube-apiserver-multinode-465478
	855e93985a2f5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   8279eef7e32f9       kube-controller-manager-multinode-465478
	
	
	==> coredns [015744245af6200c3c4e94022249c8d3742b97501d38aceee722804e2d93d908] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55072 - 10935 "HINFO IN 191575834805188837.5841183418608395440. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010368892s
	
	
	==> coredns [32f1f5a9e23abdfac0e6b957ede79fc933db4e3dbfc47ee0806cb63a99e188ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46661 - 35624 "HINFO IN 6064224368019421635.5774456436384908123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011420854s
	
	
	==> describe nodes <==
	Name:               multinode-465478
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-465478
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=multinode-465478
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_50_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:50:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-465478
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:01:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:57:01 +0000   Tue, 27 Aug 2024 22:50:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-465478
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 801a03968a924d54a795372514743338
	  System UUID:                801a0396-8a92-4d54-a795-372514743338
	  Boot ID:                    13263c25-4cc2-45a2-97a8-b5c453fc8328
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j67n7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 coredns-6f6b679f8f-gj4hr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-465478                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-rljzm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-465478             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-465478    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-dc2v7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-465478             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-465478 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-465478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-465478 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-465478 event: Registered Node multinode-465478 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-465478 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-465478 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-465478 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-465478 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-465478 event: Registered Node multinode-465478 in Controller
	
	
	Name:               multinode-465478-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-465478-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=multinode-465478
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_27T22_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:57:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-465478-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:58:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:59:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:59:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:59:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 27 Aug 2024 22:58:13 +0000   Tue, 27 Aug 2024 22:59:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    multinode-465478-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ea9254f1e0b4048a297bfb38bbb05ec
	  System UUID:                2ea9254f-1e0b-4048-a297-bfb38bbb05ec
	  Boot ID:                    7573361a-1317-4c22-904d-e9ef094d8330
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-msj9p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-2gs8n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8nfs4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-465478-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-465478-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-465478-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m47s                  kubelet          Node multinode-465478-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-465478-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-465478-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-465478-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-465478-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-465478-m02 status is now: NodeNotReady
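
	The describe output above shows multinode-465478-m02 tainted with node.kubernetes.io/unreachable and every condition stuck at Unknown ("Kubelet stopped posting node status"), consistent with the NodeNotReady event at the end of its event list. A minimal sketch of reproducing this snapshot against the same profile (the exact invocation is not captured in this report; the kubeconfig context name is assumed to match the minikube profile):

	  kubectl --context multinode-465478 get nodes -o wide
	  kubectl --context multinode-465478 describe node multinode-465478-m02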
	
	
	==> dmesg <==
	[  +0.053313] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.152499] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.128033] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.247480] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[Aug27 22:50] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +3.540174] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.066170] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.487873] systemd-fstab-generator[1198]: Ignoring "noauto" option for root device
	[  +0.087914] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.088664] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.138754] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.900950] kauditd_printk_skb: 60 callbacks suppressed
	[Aug27 22:51] kauditd_printk_skb: 14 callbacks suppressed
	[Aug27 22:56] systemd-fstab-generator[2656]: Ignoring "noauto" option for root device
	[  +0.153954] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.233284] systemd-fstab-generator[2748]: Ignoring "noauto" option for root device
	[  +0.201484] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.282227] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[ +10.230305] systemd-fstab-generator[2944]: Ignoring "noauto" option for root device
	[  +0.081698] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.782620] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	[Aug27 22:57] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.782217] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.793674] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +18.213697] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [1c5241296cf9c1bb21b04e91e762496d69b9b7bb7ff0ffd9f9cea1c054a37efa] <==
	{"level":"info","ts":"2024-08-27T22:56:59.213666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:56:59.213755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:56:59.214100Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"28dd8e6bbca035f5","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-27T22:56:59.218572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.229818Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T22:56:59.230135Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T22:56:59.230171Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T22:56:59.230296Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:56:59.230316Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:56:59.764310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-08-27T22:56:59.764407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.764428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-08-27T22:56:59.770644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:56:59.771693Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.774511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.203:2379"}
	{"level":"info","ts":"2024-08-27T22:56:59.774812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:56:59.775456Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:56:59.776128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T22:56:59.789159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T22:56:59.789201Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T22:56:59.770600Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:multinode-465478 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	
	
	==> etcd [ab19f142adda10d570ca23112687265cf045649f4f3268a2a5247c1f7e53a0ec] <==
	{"level":"info","ts":"2024-08-27T22:51:09.431067Z","caller":"traceutil/trace.go:171","msg":"trace[744458532] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"189.932497ms","start":"2024-08-27T22:51:09.241118Z","end":"2024-08-27T22:51:09.431050Z","steps":["trace[744458532] 'process raft request'  (duration: 62.45434ms)","trace[744458532] 'compare'  (duration: 126.761507ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-27T22:52:00.909387Z","caller":"traceutil/trace.go:171","msg":"trace[162212405] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"168.226835ms","start":"2024-08-27T22:52:00.741142Z","end":"2024-08-27T22:52:00.909369Z","steps":["trace[162212405] 'process raft request'  (duration: 168.052201ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:52:01.194035Z","caller":"traceutil/trace.go:171","msg":"trace[291470518] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"159.526943ms","start":"2024-08-27T22:52:01.034491Z","end":"2024-08-27T22:52:01.194018Z","steps":["trace[291470518] 'process raft request'  (duration: 98.655423ms)","trace[291470518] 'compare'  (duration: 60.784862ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.711547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.572947ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888173926873632674 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-x2d9q\" mod_revision:581 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-27T22:52:01.712421Z","caller":"traceutil/trace.go:171","msg":"trace[394810362] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"318.35492ms","start":"2024-08-27T22:52:01.394049Z","end":"2024-08-27T22:52:01.712404Z","steps":["trace[394810362] 'read index received'  (duration: 78.457368ms)","trace[394810362] 'applied index is now lower than readState.Index'  (duration: 239.895792ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.712548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.4776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-27T22:52:01.712580Z","caller":"traceutil/trace.go:171","msg":"trace[1193444802] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:582; }","duration":"318.526049ms","start":"2024-08-27T22:52:01.394043Z","end":"2024-08-27T22:52:01.712569Z","steps":["trace[1193444802] 'agreement among raft nodes before linearized reading'  (duration: 318.435662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-27T22:52:01.712677Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-27T22:52:01.393999Z","time spent":"318.653068ms","remote":"127.0.0.1:57196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":12,"response size":29,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true "}
	{"level":"info","ts":"2024-08-27T22:52:01.712909Z","caller":"traceutil/trace.go:171","msg":"trace[786201546] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"442.033471ms","start":"2024-08-27T22:52:01.270861Z","end":"2024-08-27T22:52:01.712895Z","steps":["trace[786201546] 'process raft request'  (duration: 201.684602ms)","trace[786201546] 'compare'  (duration: 238.465093ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-27T22:52:01.713011Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-27T22:52:01.270844Z","time spent":"442.113957ms","remote":"127.0.0.1:57130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2346,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-x2d9q\" mod_revision:581 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-x2d9q\" > >"}
	{"level":"info","ts":"2024-08-27T22:52:53.827398Z","caller":"traceutil/trace.go:171","msg":"trace[1783336745] transaction","detail":"{read_only:false; response_revision:704; number_of_response:1; }","duration":"204.591624ms","start":"2024-08-27T22:52:53.622782Z","end":"2024-08-27T22:52:53.827373Z","steps":["trace[1783336745] 'process raft request'  (duration: 204.430699ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:52:57.373395Z","caller":"traceutil/trace.go:171","msg":"trace[716131246] linearizableReadLoop","detail":"{readStateIndex:761; appliedIndex:760; }","duration":"149.498374ms","start":"2024-08-27T22:52:57.223878Z","end":"2024-08-27T22:52:57.373376Z","steps":["trace[716131246] 'read index received'  (duration: 149.284642ms)","trace[716131246] 'applied index is now lower than readState.Index'  (duration: 212.994µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-27T22:52:57.373579Z","caller":"traceutil/trace.go:171","msg":"trace[60494434] transaction","detail":"{read_only:false; response_revision:714; number_of_response:1; }","duration":"189.292014ms","start":"2024-08-27T22:52:57.184278Z","end":"2024-08-27T22:52:57.373570Z","steps":["trace[60494434] 'process raft request'  (duration: 188.926601ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-27T22:52:57.373717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.727573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-465478-m03\" ","response":"range_response_count:1 size:3111"}
	{"level":"info","ts":"2024-08-27T22:52:57.373781Z","caller":"traceutil/trace.go:171","msg":"trace[1429209250] range","detail":"{range_begin:/registry/minions/multinode-465478-m03; range_end:; response_count:1; response_revision:714; }","duration":"143.813615ms","start":"2024-08-27T22:52:57.229956Z","end":"2024-08-27T22:52:57.373770Z","steps":["trace[1429209250] 'agreement among raft nodes before linearized reading'  (duration: 143.646181ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-27T22:55:13.856126Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-27T22:55:13.856288Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-465478","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	{"level":"warn","ts":"2024-08-27T22:55:13.856391Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.856479Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.933937Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T22:55:13.933988Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T22:55:13.934061Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28dd8e6bbca035f5","current-leader-member-id":"28dd8e6bbca035f5"}
	{"level":"info","ts":"2024-08-27T22:55:13.936674Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:55:13.936832Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-08-27T22:55:13.936878Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-465478","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"]}
	
	
	==> kernel <==
	 23:01:06 up 11 min,  0 users,  load average: 0.11, 0.10, 0.07
	Linux multinode-465478 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7bd5a7a1c1c9f29dadf8f0e230ba7ffdbcbd0dcbf9d8d0c76ea40c2bb90eb519] <==
	I0827 22:54:32.972602       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:54:42.970836       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:54:42.971019       1 main.go:299] handling current node
	I0827 22:54:42.971067       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:54:42.971089       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:54:42.971298       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:54:42.971329       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:54:52.964015       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:54:52.964139       1 main.go:299] handling current node
	I0827 22:54:52.964181       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:54:52.964200       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:54:52.964402       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:54:52.964432       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:55:02.965422       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:55:02.965463       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:55:02.965615       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:55:02.965636       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	I0827 22:55:02.965690       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:55:02.965696       1 main.go:299] handling current node
	I0827 22:55:12.972844       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 22:55:12.972900       1 main.go:299] handling current node
	I0827 22:55:12.972928       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 22:55:12.972936       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 22:55:12.973149       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0827 22:55:12.973156       1 main.go:322] Node multinode-465478-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f96a3e43f2516b5f7fb0feb3f2391a712059edb4770df48b4486f00f17c06065] <==
	I0827 23:00:03.762021       1 main.go:299] handling current node
	I0827 23:00:13.766032       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:00:13.766105       1 main.go:299] handling current node
	I0827 23:00:13.766140       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:00:13.766146       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 23:00:23.760729       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:00:23.760792       1 main.go:299] handling current node
	I0827 23:00:23.760811       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:00:23.760817       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 23:00:33.761105       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:00:33.761281       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 23:00:33.761496       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:00:33.761529       1 main.go:299] handling current node
	I0827 23:00:43.767685       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:00:43.767807       1 main.go:299] handling current node
	I0827 23:00:43.767840       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:00:43.767849       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 23:00:53.769693       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:00:53.769840       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	I0827 23:00:53.770008       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:00:53.770040       1 main.go:299] handling current node
	I0827 23:01:03.761317       1 main.go:295] Handling node with IPs: map[192.168.39.203:{}]
	I0827 23:01:03.761371       1 main.go:299] handling current node
	I0827 23:01:03.761386       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0827 23:01:03.761392       1 main.go:322] Node multinode-465478-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0a1c64bb0ada0121724c4d33f4f230b3ad1f43b9abbbf274779bd4788785d20e] <==
	I0827 22:57:01.304655       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 22:57:01.304793       1 policy_source.go:224] refreshing policies
	I0827 22:57:01.305520       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 22:57:01.313791       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 22:57:01.314180       1 aggregator.go:171] initial CRD sync complete...
	I0827 22:57:01.314212       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 22:57:01.314370       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 22:57:01.314390       1 cache.go:39] Caches are synced for autoregister controller
	I0827 22:57:01.314508       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0827 22:57:01.315068       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 22:57:01.315791       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 22:57:01.315819       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 22:57:01.318768       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 22:57:01.391594       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 22:57:01.403787       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 22:57:01.408508       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 22:57:01.418805       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 22:57:02.216462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0827 22:57:03.482752       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:57:03.632917       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:57:03.649372       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:57:03.723894       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 22:57:03.731917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 22:57:04.878639       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 22:57:05.040639       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [30e40e98f2f390a19b45796f760e80b4894e2c00f0630447cf263b1ccc69a0d1] <==
	I0827 22:50:12.167650       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 22:50:13.042663       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 22:50:13.055818       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0827 22:50:13.063771       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 22:50:17.771489       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0827 22:50:17.869871       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0827 22:51:27.096043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51634: use of closed network connection
	E0827 22:51:27.280742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51650: use of closed network connection
	E0827 22:51:27.617118       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51678: use of closed network connection
	E0827 22:51:27.779608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51682: use of closed network connection
	E0827 22:51:27.942405       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51700: use of closed network connection
	E0827 22:51:28.208710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51728: use of closed network connection
	E0827 22:51:28.367660       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51740: use of closed network connection
	E0827 22:51:28.544750       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51754: use of closed network connection
	E0827 22:51:28.717446       1 conn.go:339] Error on socket receive: read tcp 192.168.39.203:8443->192.168.39.1:51778: use of closed network connection
	I0827 22:55:13.848587       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0827 22:55:13.862910       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869536       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869611       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869656       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869687       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869733       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869765       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.869798       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0827 22:55:13.880957       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [855e93985a2f5f009d53490e8bcc5cbd0faf2ac9e26bff9dd8ce8c6a15beeda3] <==
	I0827 22:52:49.035463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:49.267719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:52:49.268547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.490155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:52:50.490320       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-465478-m03\" does not exist"
	I0827 22:52:50.513621       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-465478-m03" podCIDRs=["10.244.3.0/24"]
	I0827 22:52:50.513811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.513943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:50.756083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:51.075668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:52:52.004835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:00.873993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:08.687179       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:53:08.688327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:08.697925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:11.968174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:46.986493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:53:46.989814       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m03"
	I0827 22:53:47.016693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:53:47.057800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.478195ms"
	I0827 22:53:47.058836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.569µs"
	I0827 22:53:52.058799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:52.075322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:53:52.091619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:54:02.165556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	
	
	==> kube-controller-manager [d2654e5707701fcb082a1ed5f1624af6fa0fda8acaa8735dfe66021fa1e83f05] <==
	I0827 22:58:20.648182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:20.665187       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-465478-m03" podCIDRs=["10.244.2.0/24"]
	I0827 22:58:20.665266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:20.665293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:20.980921       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:21.334376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:24.775118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:30.730448       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:39.923619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:39.923867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:39.941769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:44.375258       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:44.392504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:44.694568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:58:44.826824       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-465478-m02"
	I0827 22:58:44.827051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m03"
	I0827 22:59:24.713536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:59:24.731996       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:59:24.743769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.1291ms"
	I0827 22:59:24.744444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.969µs"
	I0827 22:59:29.789545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-465478-m02"
	I0827 22:59:44.666334       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-gmcnn"
	I0827 22:59:44.691076       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-gmcnn"
	I0827 22:59:44.691115       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rpnjq"
	I0827 22:59:44.710349       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rpnjq"
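
	The restarted controller-manager log above ends with PodGC force-deleting kindnet-gmcnn and kube-proxy-rpnjq, the daemonset pods orphaned by the removed third node; that is consistent with the restarted kindnet instance further above only handling the two remaining nodes. A hedged one-liner to confirm which kube-system pods remain and where they are scheduled (context name again assumed to match the profile):

	  kubectl --context multinode-465478 -n kube-system get pods -o wide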
	
	
	==> kube-proxy [827bc3f7e563106d32c709bcf98fe59b0456abab821d3dd901ccf928feee4499] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:50:18.895872       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:50:18.921433       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0827 22:50:18.921656       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:50:18.951368       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:50:18.951452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:50:18.951492       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:50:18.953840       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:50:18.954191       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:50:18.954280       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:50:18.955554       1 config.go:197] "Starting service config controller"
	I0827 22:50:18.955600       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:50:18.955634       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:50:18.955650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:50:18.956143       1 config.go:326] "Starting node config controller"
	I0827 22:50:18.957825       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:50:19.057396       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:50:19.057506       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:50:19.058831       1 shared_informer.go:320] Caches are synced for node config
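
	The kube-proxy instance above (and the restarted one in the next block) logs "Error cleaning up nftables rules ... Operation not supported" and then proceeds with the iptables proxier in IPv4 single-stack mode, so these messages read as startup cleanup noise on this guest kernel rather than the test failure itself. A hedged sketch of confirming the packet-filtering backend from inside the VM, assuming the nft and iptables binaries are present in the Buildroot guest image:

	  out/minikube-linux-amd64 -p multinode-465478 ssh "sudo nft list tables"
	  out/minikube-linux-amd64 -p multinode-465478 ssh "sudo iptables -V"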
	
	
	==> kube-proxy [de730b11023bb45ff130812d9e5eb6e5a456ef02e50b58654b9df068651dd7d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 22:57:03.086629       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 22:57:03.103688       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E0827 22:57:03.104510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:57:03.177882       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 22:57:03.177926       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 22:57:03.177954       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:57:03.180981       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:57:03.181367       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:57:03.181617       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:57:03.184368       1 config.go:197] "Starting service config controller"
	I0827 22:57:03.184505       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:57:03.187734       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:57:03.187742       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:57:03.188432       1 config.go:326] "Starting node config controller"
	I0827 22:57:03.188441       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:57:03.288395       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:57:03.288581       1 shared_informer.go:320] Caches are synced for node config
	I0827 22:57:03.288192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2597b46782de63af33542fe7a50d63529942c7248d568ba68914a7820be5b2b6] <==
	E0827 22:50:10.234777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:10.234525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:10.234820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.078537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:11.078600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.079430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 22:50:11.079467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.158134       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:50:11.158626       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0827 22:50:11.189258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 22:50:11.189303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.210837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 22:50:11.210951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.283763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 22:50:11.284000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.288715       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0827 22:50:11.288790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.352004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:50:11.352184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.371553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 22:50:11.371603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:50:11.482913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 22:50:11.483347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0827 22:50:14.214609       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 22:55:13.850359       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3e6dcde1fec1d3e908c8a7c3f2dda0b4b5718f7110d70fe6e8546bdd67502adb] <==
	I0827 22:57:00.478586       1 serving.go:386] Generated self-signed cert in-memory
	W0827 22:57:01.254047       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 22:57:01.254142       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 22:57:01.254184       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 22:57:01.254211       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 22:57:01.331387       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 22:57:01.331412       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:57:01.336310       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 22:57:01.336527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 22:57:01.336566       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 22:57:01.336585       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 22:57:01.436907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 22:59:48 multinode-465478 kubelet[3073]: E0827 22:59:48.297286    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799588296632344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:59:58 multinode-465478 kubelet[3073]: E0827 22:59:58.243366    3073 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 22:59:58 multinode-465478 kubelet[3073]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 22:59:58 multinode-465478 kubelet[3073]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 22:59:58 multinode-465478 kubelet[3073]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 22:59:58 multinode-465478 kubelet[3073]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 22:59:58 multinode-465478 kubelet[3073]: E0827 22:59:58.298867    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799598298373722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 22:59:58 multinode-465478 kubelet[3073]: E0827 22:59:58.298909    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799598298373722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:08 multinode-465478 kubelet[3073]: E0827 23:00:08.300371    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799608300067662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:08 multinode-465478 kubelet[3073]: E0827 23:00:08.300412    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799608300067662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:18 multinode-465478 kubelet[3073]: E0827 23:00:18.302134    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799618301895602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:18 multinode-465478 kubelet[3073]: E0827 23:00:18.302175    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799618301895602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:28 multinode-465478 kubelet[3073]: E0827 23:00:28.303948    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799628303658665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:28 multinode-465478 kubelet[3073]: E0827 23:00:28.303990    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799628303658665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:38 multinode-465478 kubelet[3073]: E0827 23:00:38.306012    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799638305610903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:38 multinode-465478 kubelet[3073]: E0827 23:00:38.306657    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799638305610903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:48 multinode-465478 kubelet[3073]: E0827 23:00:48.312570    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799648307849657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:48 multinode-465478 kubelet[3073]: E0827 23:00:48.312612    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799648307849657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:58 multinode-465478 kubelet[3073]: E0827 23:00:58.247975    3073 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 27 23:00:58 multinode-465478 kubelet[3073]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 27 23:00:58 multinode-465478 kubelet[3073]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 27 23:00:58 multinode-465478 kubelet[3073]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 27 23:00:58 multinode-465478 kubelet[3073]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 27 23:00:58 multinode-465478 kubelet[3073]: E0827 23:00:58.315413    3073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799658314705799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:00:58 multinode-465478 kubelet[3073]: E0827 23:00:58.315458    3073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724799658314705799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 23:01:05.720126   49272 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
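
The "bufio.Scanner: token too long" error above is Go's bufio.ErrTooLong: the scanner's default per-token limit is bufio.MaxScanTokenSize (64 KiB), and a single oversized line in lastStart.txt exceeds it. The sketch below is illustrative only, not minikube's actual logs.go code; it shows how a reader of that file could raise the limit with Scanner.Buffer.

	// Minimal sketch (assumption: we just want to stream lastStart.txt line by line).
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this, any line longer than bufio.MaxScanTokenSize (64 KiB)
		// makes Scan() stop and Err() return bufio.ErrTooLong, which is the
		// "token too long" message seen above. Allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

An alternative under the same assumption is bufio.Reader.ReadString('\n'), which grows its buffer as needed and has no fixed per-line limit.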
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-465478 -n multinode-465478
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-465478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.41s)
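
For context on the failure above: StopMultiNode stops the whole profile and then expects every node to report a stopped state within a deadline. The following is a rough, illustrative Go sketch of that kind of wait loop; it is not the actual multinode_test.go, and the 2-minute timeout is an assumed value for the example.

	// Illustrative sketch only: poll "minikube status" until the profile reports Stopped.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForStopped(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// "minikube status" exits non-zero for a stopped cluster, so only
			// the textual output is inspected here; the error is ignored.
			out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).CombinedOutput()
			s := string(out)
			if strings.Contains(s, "Stopped") && !strings.Contains(s, "Running") {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("profile %q did not reach Stopped within %v", profile, timeout)
	}

	func main() {
		if err := waitForStopped("multinode-465478", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}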

                                                
                                    
TestPreload (310.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-594382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0827 23:06:21.248247   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-594382 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.33589132s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-594382 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-594382 image pull gcr.io/k8s-minikube/busybox: (3.425134263s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-594382
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-594382: (1m49.749689285s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-594382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-594382 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.126778681s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-594382 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
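
The image list above is what the check at preload_test.go:76 rejected: gcr.io/k8s-minikube/busybox, pulled before the stop/start cycle, no longer appears after the restart. A rough, illustrative Go sketch of that kind of check follows; it is not the actual preload_test.go implementation.

	// Illustrative sketch only: list images on the restarted profile and look for busybox.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-594382", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			// This is the condition that failed in the report: the previously
			// pulled image is missing from the post-restart image list.
			fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list output, instead got:\n%s", out)
		}
	}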
panic.go:626: *** TestPreload FAILED at 2024-08-27 23:10:12.574301008 +0000 UTC m=+5569.291864813
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-594382 -n test-preload-594382
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-594382 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-594382 logs -n 25: (1.020962722s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478 sudo cat                                       | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt                       | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m02:/home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n                                                                 | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | multinode-465478-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-465478 ssh -n multinode-465478-m02 sudo cat                                   | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	|         | /home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-465478 node stop m03                                                          | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:52 UTC |
	| node    | multinode-465478 node start                                                             | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:52 UTC | 27 Aug 24 22:53 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| stop    | -p multinode-465478                                                                     | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC |                     |
	| start   | -p multinode-465478                                                                     | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:55 UTC | 27 Aug 24 22:58 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC |                     |
	| node    | multinode-465478 node delete                                                            | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC | 27 Aug 24 22:58 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-465478 stop                                                                   | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 22:58 UTC |                     |
	| start   | -p multinode-465478                                                                     | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:04 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-465478                                                                | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 23:04 UTC |                     |
	| start   | -p multinode-465478-m02                                                                 | multinode-465478-m02 | jenkins | v1.33.1 | 27 Aug 24 23:04 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-465478-m03                                                                 | multinode-465478-m03 | jenkins | v1.33.1 | 27 Aug 24 23:04 UTC | 27 Aug 24 23:05 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-465478                                                                 | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 23:05 UTC |                     |
	| delete  | -p multinode-465478-m03                                                                 | multinode-465478-m03 | jenkins | v1.33.1 | 27 Aug 24 23:05 UTC | 27 Aug 24 23:05 UTC |
	| delete  | -p multinode-465478                                                                     | multinode-465478     | jenkins | v1.33.1 | 27 Aug 24 23:05 UTC | 27 Aug 24 23:05 UTC |
	| start   | -p test-preload-594382                                                                  | test-preload-594382  | jenkins | v1.33.1 | 27 Aug 24 23:05 UTC | 27 Aug 24 23:07 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-594382 image pull                                                          | test-preload-594382  | jenkins | v1.33.1 | 27 Aug 24 23:07 UTC | 27 Aug 24 23:07 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-594382                                                                  | test-preload-594382  | jenkins | v1.33.1 | 27 Aug 24 23:07 UTC | 27 Aug 24 23:09 UTC |
	| start   | -p test-preload-594382                                                                  | test-preload-594382  | jenkins | v1.33.1 | 27 Aug 24 23:09 UTC | 27 Aug 24 23:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-594382 image list                                                          | test-preload-594382  | jenkins | v1.33.1 | 27 Aug 24 23:10 UTC | 27 Aug 24 23:10 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:09:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:09:04.265563   52129 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:09:04.265705   52129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:09:04.265716   52129 out.go:358] Setting ErrFile to fd 2...
	I0827 23:09:04.265722   52129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:09:04.265910   52129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:09:04.266456   52129 out.go:352] Setting JSON to false
	I0827 23:09:04.267398   52129 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6691,"bootTime":1724793453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:09:04.267451   52129 start.go:139] virtualization: kvm guest
	I0827 23:09:04.269689   52129 out.go:177] * [test-preload-594382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:09:04.271173   52129 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:09:04.271179   52129 notify.go:220] Checking for updates...
	I0827 23:09:04.272616   52129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:09:04.273809   52129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:09:04.275133   52129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:09:04.276526   52129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:09:04.277828   52129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:09:04.279426   52129 config.go:182] Loaded profile config "test-preload-594382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0827 23:09:04.279816   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:09:04.279860   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:09:04.294950   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0827 23:09:04.295417   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:09:04.295939   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:09:04.295958   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:09:04.296281   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:09:04.296450   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:04.298418   52129 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 23:09:04.299588   52129 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:09:04.299970   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:09:04.300013   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:09:04.314572   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0827 23:09:04.315005   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:09:04.315449   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:09:04.315466   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:09:04.315743   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:09:04.315891   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:04.350434   52129 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 23:09:04.351701   52129 start.go:297] selected driver: kvm2
	I0827 23:09:04.351716   52129 start.go:901] validating driver "kvm2" against &{Name:test-preload-594382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-594382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:09:04.351814   52129 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:09:04.352483   52129 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:09:04.352566   52129 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:09:04.366908   52129 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:09:04.367209   52129 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:09:04.367274   52129 cni.go:84] Creating CNI manager for ""
	I0827 23:09:04.367287   52129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:09:04.367336   52129 start.go:340] cluster config:
	{Name:test-preload-594382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-594382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:09:04.367448   52129 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:09:04.369300   52129 out.go:177] * Starting "test-preload-594382" primary control-plane node in "test-preload-594382" cluster
	I0827 23:09:04.370372   52129 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0827 23:09:04.838317   52129 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0827 23:09:04.838352   52129 cache.go:56] Caching tarball of preloaded images
	I0827 23:09:04.838555   52129 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0827 23:09:04.840539   52129 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0827 23:09:04.841819   52129 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0827 23:09:04.940792   52129 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0827 23:09:16.466969   52129 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0827 23:09:16.467100   52129 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0827 23:09:17.305121   52129 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0827 23:09:17.305251   52129 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/config.json ...
	I0827 23:09:17.305500   52129 start.go:360] acquireMachinesLock for test-preload-594382: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:09:17.305564   52129 start.go:364] duration metric: took 41.474µs to acquireMachinesLock for "test-preload-594382"
	I0827 23:09:17.305590   52129 start.go:96] Skipping create...Using existing machine configuration
	I0827 23:09:17.305609   52129 fix.go:54] fixHost starting: 
	I0827 23:09:17.305919   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:09:17.305959   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:09:17.320712   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36221
	I0827 23:09:17.321165   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:09:17.321624   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:09:17.321645   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:09:17.321941   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:09:17.322114   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:17.322271   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetState
	I0827 23:09:17.323926   52129 fix.go:112] recreateIfNeeded on test-preload-594382: state=Stopped err=<nil>
	I0827 23:09:17.323967   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	W0827 23:09:17.324125   52129 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 23:09:17.327115   52129 out.go:177] * Restarting existing kvm2 VM for "test-preload-594382" ...
	I0827 23:09:17.328695   52129 main.go:141] libmachine: (test-preload-594382) Calling .Start
	I0827 23:09:17.328908   52129 main.go:141] libmachine: (test-preload-594382) Ensuring networks are active...
	I0827 23:09:17.329639   52129 main.go:141] libmachine: (test-preload-594382) Ensuring network default is active
	I0827 23:09:17.329947   52129 main.go:141] libmachine: (test-preload-594382) Ensuring network mk-test-preload-594382 is active
	I0827 23:09:17.330250   52129 main.go:141] libmachine: (test-preload-594382) Getting domain xml...
	I0827 23:09:17.330956   52129 main.go:141] libmachine: (test-preload-594382) Creating domain...
	I0827 23:09:18.512522   52129 main.go:141] libmachine: (test-preload-594382) Waiting to get IP...
	I0827 23:09:18.513514   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:18.513872   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:18.513950   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:18.513873   52212 retry.go:31] will retry after 311.342504ms: waiting for machine to come up
	I0827 23:09:18.826480   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:18.826984   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:18.827013   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:18.826937   52212 retry.go:31] will retry after 373.528747ms: waiting for machine to come up
	I0827 23:09:19.202695   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:19.203163   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:19.203181   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:19.203133   52212 retry.go:31] will retry after 463.932071ms: waiting for machine to come up
	I0827 23:09:19.668729   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:19.669091   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:19.669116   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:19.669057   52212 retry.go:31] will retry after 456.733769ms: waiting for machine to come up
	I0827 23:09:20.127834   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:20.128296   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:20.128321   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:20.128233   52212 retry.go:31] will retry after 738.574956ms: waiting for machine to come up
	I0827 23:09:20.868135   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:20.868564   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:20.868603   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:20.868502   52212 retry.go:31] will retry after 585.184352ms: waiting for machine to come up
	I0827 23:09:21.455187   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:21.455619   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:21.455640   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:21.455572   52212 retry.go:31] will retry after 955.706492ms: waiting for machine to come up
	I0827 23:09:22.412973   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:22.413444   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:22.413468   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:22.413396   52212 retry.go:31] will retry after 1.157380818s: waiting for machine to come up
	I0827 23:09:23.571972   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:23.572421   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:23.572441   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:23.572386   52212 retry.go:31] will retry after 1.453445758s: waiting for machine to come up
	I0827 23:09:25.027161   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:25.027678   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:25.027715   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:25.027631   52212 retry.go:31] will retry after 1.902229446s: waiting for machine to come up
	I0827 23:09:26.930946   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:26.931455   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:26.931521   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:26.931441   52212 retry.go:31] will retry after 2.735979632s: waiting for machine to come up
	I0827 23:09:29.670868   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:29.671321   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:29.671354   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:29.671273   52212 retry.go:31] will retry after 2.413865875s: waiting for machine to come up
	I0827 23:09:32.087750   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:32.088142   52129 main.go:141] libmachine: (test-preload-594382) DBG | unable to find current IP address of domain test-preload-594382 in network mk-test-preload-594382
	I0827 23:09:32.088183   52129 main.go:141] libmachine: (test-preload-594382) DBG | I0827 23:09:32.088106   52212 retry.go:31] will retry after 3.334527072s: waiting for machine to come up
	I0827 23:09:35.425866   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.426312   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has current primary IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.426347   52129 main.go:141] libmachine: (test-preload-594382) Found IP for machine: 192.168.39.25
	I0827 23:09:35.426366   52129 main.go:141] libmachine: (test-preload-594382) Reserving static IP address...
	I0827 23:09:35.426769   52129 main.go:141] libmachine: (test-preload-594382) Reserved static IP address: 192.168.39.25
	I0827 23:09:35.426791   52129 main.go:141] libmachine: (test-preload-594382) Waiting for SSH to be available...
	I0827 23:09:35.426815   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "test-preload-594382", mac: "52:54:00:a3:c1:cb", ip: "192.168.39.25"} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.426841   52129 main.go:141] libmachine: (test-preload-594382) DBG | skip adding static IP to network mk-test-preload-594382 - found existing host DHCP lease matching {name: "test-preload-594382", mac: "52:54:00:a3:c1:cb", ip: "192.168.39.25"}
	I0827 23:09:35.426855   52129 main.go:141] libmachine: (test-preload-594382) DBG | Getting to WaitForSSH function...
	I0827 23:09:35.428815   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.429083   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.429113   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.429198   52129 main.go:141] libmachine: (test-preload-594382) DBG | Using SSH client type: external
	I0827 23:09:35.429225   52129 main.go:141] libmachine: (test-preload-594382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa (-rw-------)
	I0827 23:09:35.429265   52129 main.go:141] libmachine: (test-preload-594382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 23:09:35.429284   52129 main.go:141] libmachine: (test-preload-594382) DBG | About to run SSH command:
	I0827 23:09:35.429298   52129 main.go:141] libmachine: (test-preload-594382) DBG | exit 0
	I0827 23:09:35.556568   52129 main.go:141] libmachine: (test-preload-594382) DBG | SSH cmd err, output: <nil>: 
	I0827 23:09:35.556958   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetConfigRaw
	I0827 23:09:35.557620   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetIP
	I0827 23:09:35.559955   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.560243   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.560276   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.560472   52129 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/config.json ...
	I0827 23:09:35.560722   52129 machine.go:93] provisionDockerMachine start ...
	I0827 23:09:35.560743   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:35.560980   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:35.563249   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.563658   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.563707   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.563805   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:35.563959   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.564110   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.564260   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:35.564460   52129 main.go:141] libmachine: Using SSH client type: native
	I0827 23:09:35.564686   52129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0827 23:09:35.564698   52129 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:09:35.672411   52129 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0827 23:09:35.672457   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetMachineName
	I0827 23:09:35.672724   52129 buildroot.go:166] provisioning hostname "test-preload-594382"
	I0827 23:09:35.672752   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetMachineName
	I0827 23:09:35.672897   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:35.675639   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.675957   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.675984   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.676204   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:35.676388   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.676579   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.676707   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:35.676863   52129 main.go:141] libmachine: Using SSH client type: native
	I0827 23:09:35.677043   52129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0827 23:09:35.677069   52129 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-594382 && echo "test-preload-594382" | sudo tee /etc/hostname
	I0827 23:09:35.797247   52129 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-594382
	
	I0827 23:09:35.797271   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:35.799905   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.800246   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.800277   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.800457   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:35.800651   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.800801   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:35.800929   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:35.801087   52129 main.go:141] libmachine: Using SSH client type: native
	I0827 23:09:35.801280   52129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0827 23:09:35.801297   52129 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-594382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-594382/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-594382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:09:35.916584   52129 main.go:141] libmachine: SSH cmd err, output: <nil>: 
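The two SSH commands above set both the live hostname and /etc/hostname, then idempotently fix the 127.0.1.1 entry in /etc/hosts: replace an existing line if one is there, otherwise append one. A minimal Go sketch of rendering that shell snippet for an arbitrary hostname (the hard-coded hostname and printing to stdout are illustrative assumptions, not minikube's provisioner code):

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    // hostsFixup renders the idempotent /etc/hosts update shown in the log:
    // rewrite an existing 127.0.1.1 line if present, otherwise append one.
    const hostsFixup = `if ! grep -xq '.*\s{{.}}' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 {{.}}/g' /etc/hosts
      else
        echo '127.0.1.1 {{.}}' | sudo tee -a /etc/hosts
      fi
    fi`

    func main() {
        // Hypothetical value; this run uses "test-preload-594382".
        host := "test-preload-594382"
        fmt.Printf("sudo hostname %s && echo %q | sudo tee /etc/hostname\n", host, host)
        tmpl := template.Must(template.New("hosts").Parse(hostsFixup))
        if err := tmpl.Execute(os.Stdout, host); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }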
	I0827 23:09:35.916617   52129 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 23:09:35.916731   52129 buildroot.go:174] setting up certificates
	I0827 23:09:35.916743   52129 provision.go:84] configureAuth start
	I0827 23:09:35.916757   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetMachineName
	I0827 23:09:35.917038   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetIP
	I0827 23:09:35.919521   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.919982   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.920025   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.920128   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:35.922591   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.922923   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:35.922951   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:35.923119   52129 provision.go:143] copyHostCerts
	I0827 23:09:35.923181   52129 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 23:09:35.923201   52129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 23:09:35.923286   52129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 23:09:35.923389   52129 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 23:09:35.923400   52129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 23:09:35.923439   52129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 23:09:35.923513   52129 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 23:09:35.923523   52129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 23:09:35.923554   52129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 23:09:35.923634   52129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.test-preload-594382 san=[127.0.0.1 192.168.39.25 localhost minikube test-preload-594382]
	I0827 23:09:36.126018   52129 provision.go:177] copyRemoteCerts
	I0827 23:09:36.126074   52129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:09:36.126096   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.128672   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.128999   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.129026   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.129185   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.129389   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.129533   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.129705   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:09:36.214121   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 23:09:36.236262   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0827 23:09:36.257940   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:09:36.279197   52129 provision.go:87] duration metric: took 362.444158ms to configureAuth
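configureAuth regenerates the machine's server certificate so that its SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name, then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A hedged sketch of producing a certificate with those SANs using Go's standard library (self-signed here for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem; the literal values are copied from the log for illustration):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs mirroring the log: san=[127.0.0.1 192.168.39.25 localhost minikube test-preload-594382]
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-594382"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.25")},
            DNSNames:     []string{"localhost", "minikube", "test-preload-594382"},
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Self-signed for brevity; minikube signs the server cert with its own CA.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }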
	I0827 23:09:36.279222   52129 buildroot.go:189] setting minikube options for container-runtime
	I0827 23:09:36.279386   52129 config.go:182] Loaded profile config "test-preload-594382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0827 23:09:36.279462   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.282218   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.282624   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.282655   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.282836   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.283082   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.283278   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.283424   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.283592   52129 main.go:141] libmachine: Using SSH client type: native
	I0827 23:09:36.283751   52129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0827 23:09:36.283766   52129 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 23:09:36.502265   52129 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 23:09:36.502291   52129 machine.go:96] duration metric: took 941.555073ms to provisionDockerMachine
	I0827 23:09:36.502308   52129 start.go:293] postStartSetup for "test-preload-594382" (driver="kvm2")
	I0827 23:09:36.502321   52129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:09:36.502341   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:36.502668   52129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:09:36.502693   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.505349   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.505688   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.505719   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.505825   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.505982   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.506124   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.506254   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:09:36.591215   52129 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:09:36.595348   52129 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 23:09:36.595379   52129 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 23:09:36.595446   52129 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 23:09:36.595538   52129 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 23:09:36.595636   52129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:09:36.604681   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:09:36.629209   52129 start.go:296] duration metric: took 126.888576ms for postStartSetup
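postStartSetup creates the standard minikube directories on the guest and then mirrors everything under .minikube/files into the VM, which is how 147652.pem ends up in /etc/ssl/certs. A sketch of that local-asset scan with filepath.WalkDir (the shortened root path and printing instead of copying are assumptions):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        // Hypothetical local asset root; the run scans
        // /home/jenkins/minikube-integration/19522-7571/.minikube/files.
        root := "/home/jenkins/.minikube/files"
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            // Everything below the root keeps its relative path on the guest,
            // e.g. <root>/etc/ssl/certs/147652.pem -> /etc/ssl/certs/147652.pem.
            rel := strings.TrimPrefix(path, root)
            fmt.Printf("local asset: %s -> %s\n", path, rel)
            return nil
        })
        if err != nil {
            fmt.Println("walk error:", err)
        }
    }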
	I0827 23:09:36.629255   52129 fix.go:56] duration metric: took 19.323654223s for fixHost
	I0827 23:09:36.629278   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.631921   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.632225   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.632251   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.632392   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.632599   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.632762   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.632901   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.633063   52129 main.go:141] libmachine: Using SSH client type: native
	I0827 23:09:36.633278   52129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.25 22 <nil> <nil>}
	I0827 23:09:36.633294   52129 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 23:09:36.740769   52129 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724800176.718503480
	
	I0827 23:09:36.740810   52129 fix.go:216] guest clock: 1724800176.718503480
	I0827 23:09:36.740824   52129 fix.go:229] Guest: 2024-08-27 23:09:36.71850348 +0000 UTC Remote: 2024-08-27 23:09:36.629260586 +0000 UTC m=+32.400424469 (delta=89.242894ms)
	I0827 23:09:36.740873   52129 fix.go:200] guest clock delta is within tolerance: 89.242894ms
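The final provisioning check runs `date +%s.%N` on the guest and compares it with the host clock; here the delta is about 89 ms, inside tolerance. A small sketch of parsing that output and applying a tolerance (the 2 s threshold is an assumed value for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "date +%s.%N" output such as "1724800176.718503480"
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724800176.718503480")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for illustration
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }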
	I0827 23:09:36.740883   52129 start.go:83] releasing machines lock for "test-preload-594382", held for 19.435304751s
	I0827 23:09:36.740913   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:36.741180   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetIP
	I0827 23:09:36.743457   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.743850   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.743879   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.744044   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:36.744568   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:36.744732   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:09:36.744812   52129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:09:36.744846   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.744957   52129 ssh_runner.go:195] Run: cat /version.json
	I0827 23:09:36.744975   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:09:36.747307   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.747635   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.747659   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.747678   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.747856   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.747998   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.748112   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:36.748133   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:36.748141   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.748280   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:09:36.748305   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:09:36.748458   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:09:36.748617   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:09:36.748762   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:09:36.862011   52129 ssh_runner.go:195] Run: systemctl --version
	I0827 23:09:36.867836   52129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 23:09:37.013321   52129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 23:09:37.018881   52129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 23:09:37.018938   52129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:09:37.034995   52129 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
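Since minikube manages its own bridge CNI configuration, any pre-existing *bridge* or *podman* configs under /etc/cni/net.d are renamed with a .mk_disabled suffix, which is what the find/mv command above does. An equivalent sketch in Go (running locally rather than over SSH is an assumption):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        const suffix = ".mk_disabled"
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if filepath.Ext(m) == suffix { // already disabled
                    continue
                }
                fmt.Printf("disabling %s\n", m)
                if err := os.Rename(m, m+suffix); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }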
	I0827 23:09:37.035017   52129 start.go:495] detecting cgroup driver to use...
	I0827 23:09:37.035074   52129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 23:09:37.050473   52129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 23:09:37.064335   52129 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:09:37.064383   52129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:09:37.077631   52129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:09:37.090942   52129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:09:37.208318   52129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:09:37.375484   52129 docker.go:233] disabling docker service ...
	I0827 23:09:37.375544   52129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:09:37.389189   52129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:09:37.401947   52129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:09:37.513079   52129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:09:37.621209   52129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
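With crio chosen as the runtime, the cri-docker and docker units are stopped, disabled and masked so they cannot claim the CRI socket, and a final `systemctl is-active` confirms docker stays down. A sketch of issuing that sequence with os/exec, tolerating units that are absent (the exact error handling is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, c := range cmds {
            out, err := exec.Command("sudo", c...).CombinedOutput()
            if err != nil {
                // Missing units are fine; the log runs these best-effort
                // before checking `systemctl is-active --quiet service docker`.
                fmt.Printf("%v: %v (%s)\n", c, err, out)
            }
        }
    }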
	I0827 23:09:37.634339   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:09:37.650829   52129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0827 23:09:37.650883   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.660409   52129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 23:09:37.660503   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.670058   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.680164   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.690696   52129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:09:37.701411   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.711656   52129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:09:37.727635   52129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
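The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.7, force cgroup_manager to "cgroupfs", set conmon_cgroup to "pod", and ensure default_sysctls opens unprivileged ports. A sketch of the pause-image rewrite with Go's regexp package (editing the file locally is an assumption):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        patched := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.7"`))
        if err := os.WriteFile(conf, patched, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("pause_image pinned to registry.k8s.io/pause:3.7")
    }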
	I0827 23:09:37.737471   52129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:09:37.746325   52129 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 23:09:37.746377   52129 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 23:09:37.759134   52129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:09:37.768267   52129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:09:37.876588   52129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 23:09:37.965905   52129 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 23:09:37.965971   52129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 23:09:37.970380   52129 start.go:563] Will wait 60s for crictl version
	I0827 23:09:37.970439   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:37.973864   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:09:38.011101   52129 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
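After restarting crio, minikube waits up to 60 s for /var/run/crio/crio.sock to appear and for `crictl version` to respond, which is what the two "Will wait 60s" messages refer to. A minimal polling sketch (the 500 ms retry interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls fn until it succeeds or the deadline passes.
    func waitFor(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        sock := "/var/run/crio/crio.sock"
        if err := waitFor(60*time.Second, func() error {
            _, err := os.Stat(sock)
            return err
        }); err != nil {
            panic(err)
        }
        if err := waitFor(60*time.Second, func() error {
            return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
        }); err != nil {
            panic(err)
        }
        fmt.Println("crio socket and crictl are ready")
    }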
	I0827 23:09:38.011173   52129 ssh_runner.go:195] Run: crio --version
	I0827 23:09:38.038254   52129 ssh_runner.go:195] Run: crio --version
	I0827 23:09:38.067188   52129 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0827 23:09:38.068718   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetIP
	I0827 23:09:38.071853   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:38.072275   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:09:38.072297   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:09:38.072541   52129 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0827 23:09:38.076599   52129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:09:38.088777   52129 kubeadm.go:883] updating cluster {Name:test-preload-594382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-594382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:09:38.088887   52129 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0827 23:09:38.088925   52129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:09:38.128215   52129 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0827 23:09:38.128304   52129 ssh_runner.go:195] Run: which lz4
	I0827 23:09:38.136164   52129 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 23:09:38.140355   52129 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 23:09:38.140392   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0827 23:09:39.556871   52129 crio.go:462] duration metric: took 1.420742044s to copy over tarball
	I0827 23:09:39.556939   52129 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 23:09:41.858088   52129 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30111438s)
	I0827 23:09:41.858123   52129 crio.go:469] duration metric: took 2.301224988s to extract the tarball
	I0827 23:09:41.858134   52129 ssh_runner.go:146] rm: /preloaded.tar.lz4
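Because no preloaded images were found on the guest, the ~460 MB preloaded-images tarball is copied over and unpacked into /var with `tar --xattrs -I lz4`, then deleted. The same extraction can be sketched with archive/tar plus an lz4 reader; github.com/pierrec/lz4/v4 is an assumed third-party dependency not named by the log, and xattr handling is omitted:

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"

        "github.com/pierrec/lz4/v4"
    )

    func main() {
        f, err := os.Open("/preloaded.tar.lz4")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Decompress the lz4 stream and walk the tar entries inside it.
        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            // The real command extracts under /var; here we only list entries.
            fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
        }
    }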
	I0827 23:09:41.898347   52129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:09:41.941947   52129 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0827 23:09:41.941971   52129 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0827 23:09:41.942036   52129 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:09:41.942083   52129 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:41.942104   52129 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:41.942131   52129 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0827 23:09:41.942081   52129 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:41.942171   52129 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:41.942104   52129 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:41.942043   52129 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:41.943599   52129 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:41.943611   52129 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:41.943621   52129 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:41.943630   52129 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:41.943637   52129 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0827 23:09:41.943643   52129 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:41.943660   52129 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:09:41.943696   52129 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.174198   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.191755   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:42.208608   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:42.216697   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:42.218794   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0827 23:09:42.231409   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:42.240275   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:42.243106   52129 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0827 23:09:42.243146   52129 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.243185   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.302300   52129 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0827 23:09:42.302350   52129 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:42.302298   52129 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0827 23:09:42.302441   52129 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:42.302410   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.302497   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.309186   52129 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0827 23:09:42.309216   52129 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0827 23:09:42.309259   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.362794   52129 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0827 23:09:42.362836   52129 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:42.362874   52129 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0827 23:09:42.362891   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.362886   52129 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0827 23:09:42.362906   52129 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:42.362920   52129 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:42.362952   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.362952   52129 ssh_runner.go:195] Run: which crictl
	I0827 23:09:42.363005   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:42.362954   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.363071   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:42.363119   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0827 23:09:42.386826   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:42.462758   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:42.462801   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:42.462822   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:42.462891   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:42.462903   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.462953   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0827 23:09:42.486561   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:42.607853   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0827 23:09:42.607953   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:42.607973   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0827 23:09:42.609311   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0827 23:09:42.610171   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:42.693364   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0827 23:09:42.693399   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0827 23:09:42.693459   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0827 23:09:42.693484   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0827 23:09:42.693539   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0827 23:09:42.693550   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0827 23:09:42.698458   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0827 23:09:42.698469   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0827 23:09:42.698545   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0827 23:09:42.708523   52129 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0827 23:09:42.773565   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0827 23:09:42.773663   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0827 23:09:42.788132   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0827 23:09:42.788173   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0827 23:09:42.788185   52129 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0827 23:09:42.788227   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0827 23:09:42.788271   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0827 23:09:42.788227   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0827 23:09:42.788228   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0827 23:09:42.796222   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0827 23:09:42.796306   52129 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0827 23:09:42.796322   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0827 23:09:42.796353   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0827 23:09:42.796398   52129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0827 23:09:43.077218   52129 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:09:45.747931   52129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.959675885s)
	I0827 23:09:45.747973   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0827 23:09:45.747985   52129 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.959682931s)
	I0827 23:09:45.747996   52129 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0827 23:09:45.748009   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0827 23:09:45.748046   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0827 23:09:45.748105   52129 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.951693666s)
	I0827 23:09:45.748132   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0827 23:09:45.748148   52129 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.951808799s)
	I0827 23:09:45.748162   52129 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0827 23:09:45.748189   52129 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.67094353s)
	I0827 23:09:46.593350   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0827 23:09:46.593407   52129 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0827 23:09:46.593478   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0827 23:09:46.731765   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0827 23:09:46.731806   52129 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0827 23:09:46.731844   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0827 23:09:48.880629   52129 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.148766608s)
	I0827 23:09:48.880655   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0827 23:09:48.880686   52129 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0827 23:09:48.880735   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0827 23:09:49.221122   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0827 23:09:49.221156   52129 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0827 23:09:49.221197   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0827 23:09:49.969226   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0827 23:09:49.969286   52129 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0827 23:09:49.969356   52129 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0827 23:09:50.711040   52129 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0827 23:09:50.711074   52129 cache_images.go:123] Successfully loaded all cached images
	I0827 23:09:50.711079   52129 cache_images.go:92] duration metric: took 8.769096399s to LoadCachedImages
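The block above is the cache fallback path: for each required image, `podman image inspect` checks whether it already exists in the CRI-O store at the expected digest, `crictl rmi` clears any stale tag, and `podman load -i` streams the cached archive from /var/lib/minikube/images into the runtime. The log shows these steps interleaved across images; the sketch below runs them sequentially for clarity, and the helper name is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func loadCachedImage(image, cacheDir string) error {
        // e.g. registry.k8s.io/kube-proxy:v1.24.4 -> kube-proxy_v1.24.4
        base := filepath.Base(strings.ReplaceAll(image, ":", "_"))
        archive := filepath.Join(cacheDir, base)

        // Already present in the container runtime?
        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return nil
        }
        // Drop any stale tag, ignoring "image not found".
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        // Load the cached archive into the CRI-O image store.
        out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v (%s)", archive, err, out)
        }
        return nil
    }

    func main() {
        images := []string{
            "registry.k8s.io/kube-apiserver:v1.24.4",
            "registry.k8s.io/kube-proxy:v1.24.4",
            "registry.k8s.io/pause:3.7",
            "registry.k8s.io/etcd:3.5.3-0",
            "registry.k8s.io/coredns/coredns:v1.8.6",
        }
        for _, img := range images {
            if err := loadCachedImage(img, "/var/lib/minikube/images"); err != nil {
                fmt.Println(err)
            }
        }
    }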
	I0827 23:09:50.711090   52129 kubeadm.go:934] updating node { 192.168.39.25 8443 v1.24.4 crio true true} ...
	I0827 23:09:50.711189   52129 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-594382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-594382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
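The kubelet drop-in above is rendered from the node's settings: the binary path embeds the Kubernetes version, and --hostname-override plus --node-ip come from the machine config. A sketch of rendering the same unit with text/template (the struct and field values are illustrative, copied from the log):

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        data := struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.24.4", "test-preload-594382", "192.168.39.25"}
        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }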
	I0827 23:09:50.711262   52129 ssh_runner.go:195] Run: crio config
	I0827 23:09:50.755376   52129 cni.go:84] Creating CNI manager for ""
	I0827 23:09:50.755404   52129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:09:50.755422   52129 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:09:50.755452   52129 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.25 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-594382 NodeName:test-preload-594382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 23:09:50.755620   52129 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-594382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:09:50.755705   52129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0827 23:09:50.765296   52129 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:09:50.765359   52129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:09:50.774401   52129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0827 23:09:50.793701   52129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:09:50.809532   52129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0827 23:09:50.826284   52129 ssh_runner.go:195] Run: grep 192.168.39.25	control-plane.minikube.internal$ /etc/hosts
	I0827 23:09:50.830303   52129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:09:50.842149   52129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:09:50.953739   52129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:09:50.980515   52129 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382 for IP: 192.168.39.25
	I0827 23:09:50.980542   52129 certs.go:194] generating shared ca certs ...
	I0827 23:09:50.980563   52129 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:09:50.980776   52129 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 23:09:50.980845   52129 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 23:09:50.980860   52129 certs.go:256] generating profile certs ...
	I0827 23:09:50.980989   52129 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/client.key
	I0827 23:09:50.981102   52129 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/apiserver.key.15fc4463
	I0827 23:09:50.981159   52129 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/proxy-client.key
	I0827 23:09:50.981294   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 23:09:50.981341   52129 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 23:09:50.981355   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:09:50.981385   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 23:09:50.981417   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:09:50.981453   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 23:09:50.981503   52129 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:09:50.982453   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:09:51.019169   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 23:09:51.043512   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:09:51.073995   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:09:51.099497   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0827 23:09:51.142968   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 23:09:51.167631   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:09:51.202100   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:09:51.225417   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 23:09:51.247163   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 23:09:51.268273   52129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:09:51.289227   52129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:09:51.304878   52129 ssh_runner.go:195] Run: openssl version
	I0827 23:09:51.310246   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 23:09:51.320249   52129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 23:09:51.324310   52129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 23:09:51.324364   52129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 23:09:51.329954   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 23:09:51.340350   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 23:09:51.350659   52129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 23:09:51.354716   52129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 23:09:51.354764   52129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 23:09:51.359860   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:09:51.369595   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:09:51.379560   52129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:09:51.383576   52129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:09:51.383632   52129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:09:51.388951   52129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
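The hash/symlink pairs above follow the standard OpenSSL CA-store layout: the value printed by "openssl x509 -hash -noout" becomes the name of a "<hash>.0" symlink under /etc/ssl/certs pointing at the certificate. A minimal Go sketch of that step, with an illustrative helper name and paths taken from the log (this is not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash" + "ln -fs <hash>.0" steps in
// the log: it asks openssl for the certificate's subject hash and then creates
// the <certsDir>/<hash>.0 symlink that OpenSSL-based clients look up.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are illustrative; the run above works on /usr/share/ca-certificates.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}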
	I0827 23:09:51.398812   52129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:09:51.402982   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 23:09:51.408319   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 23:09:51.413558   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 23:09:51.418909   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 23:09:51.423990   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 23:09:51.429042   52129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
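Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours. The same check can be done natively in Go; a rough equivalent under that assumption (file path copied from the log, helper name hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what "openssl x509 -checkend 86400" tests in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}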
	I0827 23:09:51.434257   52129 kubeadm.go:392] StartCluster: {Name:test-preload-594382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-594382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:09:51.434352   52129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 23:09:51.434398   52129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:09:51.470060   52129 cri.go:89] found id: ""
	I0827 23:09:51.470124   52129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 23:09:51.479887   52129 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 23:09:51.479906   52129 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 23:09:51.479947   52129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 23:09:51.489026   52129 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:09:51.489536   52129 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-594382" does not appear in /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:09:51.489711   52129 kubeconfig.go:62] /home/jenkins/minikube-integration/19522-7571/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-594382" cluster setting kubeconfig missing "test-preload-594382" context setting]
	I0827 23:09:51.490130   52129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:09:51.490918   52129 kapi.go:59] client config for test-preload-594382: &rest.Config{Host:"https://192.168.39.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 23:09:51.491587   52129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 23:09:51.500891   52129 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.25
	I0827 23:09:51.500925   52129 kubeadm.go:1160] stopping kube-system containers ...
	I0827 23:09:51.500937   52129 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0827 23:09:51.500976   52129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:09:51.535165   52129 cri.go:89] found id: ""
	I0827 23:09:51.535260   52129 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 23:09:51.554701   52129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:09:51.565737   52129 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:09:51.565753   52129 kubeadm.go:157] found existing configuration files:
	
	I0827 23:09:51.565793   52129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:09:51.575702   52129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:09:51.575750   52129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:09:51.585867   52129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:09:51.595616   52129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:09:51.595671   52129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:09:51.605861   52129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:09:51.615290   52129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:09:51.615353   52129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:09:51.625138   52129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:09:51.634444   52129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:09:51.634484   52129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
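The grep/rm sequence above checks each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it, so the kubeadm phases that follow can regenerate them. A compact Go sketch of that cleanup (function name hypothetical; endpoint and file list taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, mirroring the grep/rm loop in the log.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean (the "status 2" cases above)
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}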
	I0827 23:09:51.644349   52129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 23:09:51.654357   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:51.739399   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:52.320608   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:52.565729   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:52.628483   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:52.702840   52129 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:09:52.702939   52129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:09:53.203319   52129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:09:53.703154   52129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:09:53.722869   52129 api_server.go:72] duration metric: took 1.020036854s to wait for apiserver process to appear ...
	I0827 23:09:53.722901   52129 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:09:53.722929   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:53.723420   52129 api_server.go:269] stopped: https://192.168.39.25:8443/healthz: Get "https://192.168.39.25:8443/healthz": dial tcp 192.168.39.25:8443: connect: connection refused
	I0827 23:09:54.223184   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:54.223739   52129 api_server.go:269] stopped: https://192.168.39.25:8443/healthz: Get "https://192.168.39.25:8443/healthz": dial tcp 192.168.39.25:8443: connect: connection refused
	I0827 23:09:54.723845   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:57.891888   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0827 23:09:57.891936   52129 api_server.go:103] status: https://192.168.39.25:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0827 23:09:57.891948   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:57.921525   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0827 23:09:57.921556   52129 api_server.go:103] status: https://192.168.39.25:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0827 23:09:58.224055   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:58.230695   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0827 23:09:58.230721   52129 api_server.go:103] status: https://192.168.39.25:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0827 23:09:58.723242   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:58.730715   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0827 23:09:58.730747   52129 api_server.go:103] status: https://192.168.39.25:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0827 23:09:59.223315   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:09:59.228484   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I0827 23:09:59.235101   52129 api_server.go:141] control plane version: v1.24.4
	I0827 23:09:59.235130   52129 api_server.go:131] duration metric: took 5.512222524s to wait for apiserver health ...
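The healthz wait above treats connection refused, 403 (RBAC bootstrap roles not yet created) and 500 (post-start hooks still failing) as "keep polling" and only stops once /healthz returns 200. A self-contained Go sketch of that loop, with the URL taken from the log; the real client authenticates with client certificates rather than skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the timeout expires, printing intermediate non-200 responses as the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.25:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}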
	I0827 23:09:59.235139   52129 cni.go:84] Creating CNI manager for ""
	I0827 23:09:59.235145   52129 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:09:59.236814   52129 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 23:09:59.238070   52129 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 23:09:59.247963   52129 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
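The bridge CNI step above writes a conflist into /etc/cni/net.d on the guest. The sketch below emits a generic bridge + host-local IPAM configuration into that location; it is illustrative only and not the exact 496-byte file minikube transfers:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// writeBridgeConflist drops a minimal bridge CNI configuration into dir,
// the directory the step above populates via scp.
func writeBridgeConflist(dir string) error {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), data, 0o644)
}

func main() {
	if err := writeBridgeConflist("/etc/cni/net.d"); err != nil {
		panic(err)
	}
}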
	I0827 23:09:59.265030   52129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:09:59.265102   52129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 23:09:59.265117   52129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 23:09:59.273944   52129 system_pods.go:59] 7 kube-system pods found
	I0827 23:09:59.273986   52129 system_pods.go:61] "coredns-6d4b75cb6d-7rj8d" [b4812ff5-eeea-421a-ba2c-cf00058ea40b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0827 23:09:59.273992   52129 system_pods.go:61] "etcd-test-preload-594382" [eb61a574-d185-4a31-a884-d5c6cc1d3d14] Running
	I0827 23:09:59.273999   52129 system_pods.go:61] "kube-apiserver-test-preload-594382" [cf5f86a8-d6a8-4e7d-b1f6-1b4404e32b64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0827 23:09:59.274003   52129 system_pods.go:61] "kube-controller-manager-test-preload-594382" [718a791d-b500-4891-93f4-920040f39da5] Running
	I0827 23:09:59.274008   52129 system_pods.go:61] "kube-proxy-6vfc7" [9372e29c-8f48-4573-a970-4344acc7a4ae] Running
	I0827 23:09:59.274018   52129 system_pods.go:61] "kube-scheduler-test-preload-594382" [ba7c151e-b75c-4f90-97a0-e362f9fbcb87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0827 23:09:59.274031   52129 system_pods.go:61] "storage-provisioner" [f2fd95d7-5a52-4a48-8553-2f05e0bb716d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0827 23:09:59.274037   52129 system_pods.go:74] duration metric: took 8.990145ms to wait for pod list to return data ...
	I0827 23:09:59.274046   52129 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:09:59.277047   52129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:09:59.277070   52129 node_conditions.go:123] node cpu capacity is 2
	I0827 23:09:59.277081   52129 node_conditions.go:105] duration metric: took 3.029696ms to run NodePressure ...
	I0827 23:09:59.277098   52129 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:09:59.517582   52129 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0827 23:09:59.523568   52129 kubeadm.go:739] kubelet initialised
	I0827 23:09:59.523589   52129 kubeadm.go:740] duration metric: took 5.983139ms waiting for restarted kubelet to initialise ...
	I0827 23:09:59.523607   52129 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:09:59.530192   52129 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace to be "Ready" ...
	I0827 23:09:59.535914   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.535935   52129 pod_ready.go:82] duration metric: took 5.721787ms for pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace to be "Ready" ...
	E0827 23:09:59.535944   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.535950   52129 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:09:59.541192   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "etcd-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.541213   52129 pod_ready.go:82] duration metric: took 5.254454ms for pod "etcd-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	E0827 23:09:59.541222   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "etcd-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.541231   52129 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:09:59.545574   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "kube-apiserver-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.545593   52129 pod_ready.go:82] duration metric: took 4.35416ms for pod "kube-apiserver-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	E0827 23:09:59.545601   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "kube-apiserver-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.545607   52129 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:09:59.669966   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.669998   52129 pod_ready.go:82] duration metric: took 124.381959ms for pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	E0827 23:09:59.670011   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:09:59.670020   52129 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6vfc7" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:00.074578   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "kube-proxy-6vfc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:00.074608   52129 pod_ready.go:82] duration metric: took 404.575929ms for pod "kube-proxy-6vfc7" in "kube-system" namespace to be "Ready" ...
	E0827 23:10:00.074628   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "kube-proxy-6vfc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:00.074634   52129 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:00.470695   52129 pod_ready.go:98] node "test-preload-594382" hosting pod "kube-scheduler-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:00.470727   52129 pod_ready.go:82] duration metric: took 396.085474ms for pod "kube-scheduler-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	E0827 23:10:00.470741   52129 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-594382" hosting pod "kube-scheduler-test-preload-594382" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:00.470750   52129 pod_ready.go:39] duration metric: took 947.134073ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
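The pod_ready loop above polls each system-critical pod until its Ready condition is True; here it bails early on every pod because the node itself is not yet Ready. An equivalent wait written against client-go might look like the following (kubeconfig path and pod name taken from the log, helper name hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or
// the timeout hits, roughly mirroring the pod_ready wait in the log.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "etcd-test-preload-594382", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}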
	I0827 23:10:00.470769   52129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 23:10:00.482437   52129 ops.go:34] apiserver oom_adj: -16
	I0827 23:10:00.482463   52129 kubeadm.go:597] duration metric: took 9.002550161s to restartPrimaryControlPlane
	I0827 23:10:00.482475   52129 kubeadm.go:394] duration metric: took 9.048223762s to StartCluster
	I0827 23:10:00.482494   52129 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:10:00.482563   52129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:10:00.483400   52129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:10:00.483634   52129 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.25 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:10:00.483700   52129 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:10:00.483787   52129 addons.go:69] Setting storage-provisioner=true in profile "test-preload-594382"
	I0827 23:10:00.483822   52129 addons.go:234] Setting addon storage-provisioner=true in "test-preload-594382"
	W0827 23:10:00.483831   52129 addons.go:243] addon storage-provisioner should already be in state true
	I0827 23:10:00.483832   52129 config.go:182] Loaded profile config "test-preload-594382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0827 23:10:00.483837   52129 addons.go:69] Setting default-storageclass=true in profile "test-preload-594382"
	I0827 23:10:00.483858   52129 host.go:66] Checking if "test-preload-594382" exists ...
	I0827 23:10:00.483878   52129 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-594382"
	I0827 23:10:00.484202   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:10:00.484220   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:10:00.484241   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:10:00.484251   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:10:00.485385   52129 out.go:177] * Verifying Kubernetes components...
	I0827 23:10:00.486792   52129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:10:00.499246   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0827 23:10:00.499289   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I0827 23:10:00.499714   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:10:00.499774   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:10:00.500147   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:10:00.500160   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:10:00.500306   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:10:00.500328   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:10:00.500524   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:10:00.500656   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:10:00.500669   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetState
	I0827 23:10:00.501117   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:10:00.501153   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:10:00.502953   52129 kapi.go:59] client config for test-preload-594382: &rest.Config{Host:"https://192.168.39.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/client.crt", KeyFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/profiles/test-preload-594382/client.key", CAFile:"/home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0827 23:10:00.503200   52129 addons.go:234] Setting addon default-storageclass=true in "test-preload-594382"
	W0827 23:10:00.503215   52129 addons.go:243] addon default-storageclass should already be in state true
	I0827 23:10:00.503246   52129 host.go:66] Checking if "test-preload-594382" exists ...
	I0827 23:10:00.503527   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:10:00.503573   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:10:00.516031   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0827 23:10:00.516454   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:10:00.517001   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:10:00.517024   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:10:00.517399   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:10:00.517598   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetState
	I0827 23:10:00.517616   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I0827 23:10:00.518023   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:10:00.518497   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:10:00.518518   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:10:00.518837   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:10:00.519531   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:10:00.519556   52129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:10:00.519606   52129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:10:00.521939   52129 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:10:00.523231   52129 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:10:00.523249   52129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 23:10:00.523274   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:10:00.526340   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:10:00.526811   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:10:00.526836   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:10:00.527055   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:10:00.527229   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:10:00.527382   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:10:00.527507   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:10:00.538350   52129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0827 23:10:00.538832   52129 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:10:00.539318   52129 main.go:141] libmachine: Using API Version  1
	I0827 23:10:00.539362   52129 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:10:00.539799   52129 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:10:00.539978   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetState
	I0827 23:10:00.541585   52129 main.go:141] libmachine: (test-preload-594382) Calling .DriverName
	I0827 23:10:00.541818   52129 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 23:10:00.541840   52129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 23:10:00.541856   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHHostname
	I0827 23:10:00.544660   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:10:00.545105   52129 main.go:141] libmachine: (test-preload-594382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c1:cb", ip: ""} in network mk-test-preload-594382: {Iface:virbr1 ExpiryTime:2024-08-28 00:09:27 +0000 UTC Type:0 Mac:52:54:00:a3:c1:cb Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:test-preload-594382 Clientid:01:52:54:00:a3:c1:cb}
	I0827 23:10:00.545135   52129 main.go:141] libmachine: (test-preload-594382) DBG | domain test-preload-594382 has defined IP address 192.168.39.25 and MAC address 52:54:00:a3:c1:cb in network mk-test-preload-594382
	I0827 23:10:00.545299   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHPort
	I0827 23:10:00.545456   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHKeyPath
	I0827 23:10:00.545617   52129 main.go:141] libmachine: (test-preload-594382) Calling .GetSSHUsername
	I0827 23:10:00.545758   52129 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/test-preload-594382/id_rsa Username:docker}
	I0827 23:10:00.654585   52129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:10:00.669331   52129 node_ready.go:35] waiting up to 6m0s for node "test-preload-594382" to be "Ready" ...
	I0827 23:10:00.767078   52129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:10:00.786396   52129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:10:01.675171   52129 main.go:141] libmachine: Making call to close driver server
	I0827 23:10:01.675204   52129 main.go:141] libmachine: (test-preload-594382) Calling .Close
	I0827 23:10:01.675485   52129 main.go:141] libmachine: Successfully made call to close driver server
	I0827 23:10:01.675501   52129 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 23:10:01.675514   52129 main.go:141] libmachine: Making call to close driver server
	I0827 23:10:01.675523   52129 main.go:141] libmachine: (test-preload-594382) Calling .Close
	I0827 23:10:01.675527   52129 main.go:141] libmachine: (test-preload-594382) DBG | Closing plugin on server side
	I0827 23:10:01.675742   52129 main.go:141] libmachine: (test-preload-594382) DBG | Closing plugin on server side
	I0827 23:10:01.675796   52129 main.go:141] libmachine: Successfully made call to close driver server
	I0827 23:10:01.675811   52129 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 23:10:01.684442   52129 main.go:141] libmachine: Making call to close driver server
	I0827 23:10:01.684458   52129 main.go:141] libmachine: (test-preload-594382) Calling .Close
	I0827 23:10:01.684715   52129 main.go:141] libmachine: Successfully made call to close driver server
	I0827 23:10:01.684734   52129 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 23:10:01.684743   52129 main.go:141] libmachine: Making call to close driver server
	I0827 23:10:01.684751   52129 main.go:141] libmachine: (test-preload-594382) Calling .Close
	I0827 23:10:01.684981   52129 main.go:141] libmachine: (test-preload-594382) DBG | Closing plugin on server side
	I0827 23:10:01.684987   52129 main.go:141] libmachine: Successfully made call to close driver server
	I0827 23:10:01.684999   52129 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 23:10:01.689985   52129 main.go:141] libmachine: Making call to close driver server
	I0827 23:10:01.690002   52129 main.go:141] libmachine: (test-preload-594382) Calling .Close
	I0827 23:10:01.690285   52129 main.go:141] libmachine: Successfully made call to close driver server
	I0827 23:10:01.690300   52129 main.go:141] libmachine: Making call to close connection to plugin binary
	I0827 23:10:01.690312   52129 main.go:141] libmachine: (test-preload-594382) DBG | Closing plugin on server side
	I0827 23:10:01.693095   52129 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0827 23:10:01.694184   52129 addons.go:510] duration metric: took 1.210490937s for enable addons: enabled=[storage-provisioner default-storageclass]
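The addon step above applies each manifest with the version-matched kubectl binary inside the VM, pointed at the guest kubeconfig. A Go sketch of that invocation, assuming it runs on the guest itself (paths copied from the log; wrapper name hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon runs the pinned kubectl binary against the in-VM kubeconfig to
// apply an addon manifest, matching the command shown in the log.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.4/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Fprintln(os.Stderr, "apply", m, "failed:", err)
		}
	}
}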
	I0827 23:10:02.673270   52129 node_ready.go:53] node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:05.173365   52129 node_ready.go:53] node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:07.173794   52129 node_ready.go:53] node "test-preload-594382" has status "Ready":"False"
	I0827 23:10:08.673488   52129 node_ready.go:49] node "test-preload-594382" has status "Ready":"True"
	I0827 23:10:08.673513   52129 node_ready.go:38] duration metric: took 8.0041503s for node "test-preload-594382" to be "Ready" ...
	I0827 23:10:08.673522   52129 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:10:08.679733   52129 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:08.685394   52129 pod_ready.go:93] pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:08.685419   52129 pod_ready.go:82] duration metric: took 5.661893ms for pod "coredns-6d4b75cb6d-7rj8d" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:08.685437   52129 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:10.690889   52129 pod_ready.go:103] pod "etcd-test-preload-594382" in "kube-system" namespace has status "Ready":"False"
	I0827 23:10:11.191919   52129 pod_ready.go:93] pod "etcd-test-preload-594382" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:11.191943   52129 pod_ready.go:82] duration metric: took 2.506498358s for pod "etcd-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.191955   52129 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.196443   52129 pod_ready.go:93] pod "kube-apiserver-test-preload-594382" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:11.196479   52129 pod_ready.go:82] duration metric: took 4.515637ms for pod "kube-apiserver-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.196497   52129 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.201878   52129 pod_ready.go:93] pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:11.201901   52129 pod_ready.go:82] duration metric: took 5.395284ms for pod "kube-controller-manager-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.201913   52129 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6vfc7" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.206093   52129 pod_ready.go:93] pod "kube-proxy-6vfc7" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:11.206109   52129 pod_ready.go:82] duration metric: took 4.189822ms for pod "kube-proxy-6vfc7" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.206118   52129 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.474161   52129 pod_ready.go:93] pod "kube-scheduler-test-preload-594382" in "kube-system" namespace has status "Ready":"True"
	I0827 23:10:11.474182   52129 pod_ready.go:82] duration metric: took 268.057966ms for pod "kube-scheduler-test-preload-594382" in "kube-system" namespace to be "Ready" ...
	I0827 23:10:11.474192   52129 pod_ready.go:39] duration metric: took 2.800662112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:10:11.474205   52129 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:10:11.474250   52129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:10:11.487788   52129 api_server.go:72] duration metric: took 11.004124388s to wait for apiserver process to appear ...
	I0827 23:10:11.487812   52129 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:10:11.487831   52129 api_server.go:253] Checking apiserver healthz at https://192.168.39.25:8443/healthz ...
	I0827 23:10:11.492422   52129 api_server.go:279] https://192.168.39.25:8443/healthz returned 200:
	ok
	I0827 23:10:11.493227   52129 api_server.go:141] control plane version: v1.24.4
	I0827 23:10:11.493247   52129 api_server.go:131] duration metric: took 5.427953ms to wait for apiserver health ...
	I0827 23:10:11.493265   52129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:10:11.675876   52129 system_pods.go:59] 7 kube-system pods found
	I0827 23:10:11.675905   52129 system_pods.go:61] "coredns-6d4b75cb6d-7rj8d" [b4812ff5-eeea-421a-ba2c-cf00058ea40b] Running
	I0827 23:10:11.675911   52129 system_pods.go:61] "etcd-test-preload-594382" [eb61a574-d185-4a31-a884-d5c6cc1d3d14] Running
	I0827 23:10:11.675915   52129 system_pods.go:61] "kube-apiserver-test-preload-594382" [cf5f86a8-d6a8-4e7d-b1f6-1b4404e32b64] Running
	I0827 23:10:11.675921   52129 system_pods.go:61] "kube-controller-manager-test-preload-594382" [718a791d-b500-4891-93f4-920040f39da5] Running
	I0827 23:10:11.675925   52129 system_pods.go:61] "kube-proxy-6vfc7" [9372e29c-8f48-4573-a970-4344acc7a4ae] Running
	I0827 23:10:11.675929   52129 system_pods.go:61] "kube-scheduler-test-preload-594382" [ba7c151e-b75c-4f90-97a0-e362f9fbcb87] Running
	I0827 23:10:11.675933   52129 system_pods.go:61] "storage-provisioner" [f2fd95d7-5a52-4a48-8553-2f05e0bb716d] Running
	I0827 23:10:11.675940   52129 system_pods.go:74] duration metric: took 182.669104ms to wait for pod list to return data ...
	I0827 23:10:11.675951   52129 default_sa.go:34] waiting for default service account to be created ...
	I0827 23:10:11.872923   52129 default_sa.go:45] found service account: "default"
	I0827 23:10:11.872950   52129 default_sa.go:55] duration metric: took 196.991995ms for default service account to be created ...
	I0827 23:10:11.872961   52129 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 23:10:12.076313   52129 system_pods.go:86] 7 kube-system pods found
	I0827 23:10:12.076345   52129 system_pods.go:89] "coredns-6d4b75cb6d-7rj8d" [b4812ff5-eeea-421a-ba2c-cf00058ea40b] Running
	I0827 23:10:12.076356   52129 system_pods.go:89] "etcd-test-preload-594382" [eb61a574-d185-4a31-a884-d5c6cc1d3d14] Running
	I0827 23:10:12.076368   52129 system_pods.go:89] "kube-apiserver-test-preload-594382" [cf5f86a8-d6a8-4e7d-b1f6-1b4404e32b64] Running
	I0827 23:10:12.076373   52129 system_pods.go:89] "kube-controller-manager-test-preload-594382" [718a791d-b500-4891-93f4-920040f39da5] Running
	I0827 23:10:12.076378   52129 system_pods.go:89] "kube-proxy-6vfc7" [9372e29c-8f48-4573-a970-4344acc7a4ae] Running
	I0827 23:10:12.076382   52129 system_pods.go:89] "kube-scheduler-test-preload-594382" [ba7c151e-b75c-4f90-97a0-e362f9fbcb87] Running
	I0827 23:10:12.076388   52129 system_pods.go:89] "storage-provisioner" [f2fd95d7-5a52-4a48-8553-2f05e0bb716d] Running
	I0827 23:10:12.076397   52129 system_pods.go:126] duration metric: took 203.429316ms to wait for k8s-apps to be running ...
	I0827 23:10:12.076407   52129 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 23:10:12.076459   52129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:10:12.090757   52129 system_svc.go:56] duration metric: took 14.341673ms WaitForService to wait for kubelet
	I0827 23:10:12.090803   52129 kubeadm.go:582] duration metric: took 11.607140906s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:10:12.090826   52129 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:10:12.274512   52129 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:10:12.274546   52129 node_conditions.go:123] node cpu capacity is 2
	I0827 23:10:12.274559   52129 node_conditions.go:105] duration metric: took 183.72816ms to run NodePressure ...
	I0827 23:10:12.274570   52129 start.go:241] waiting for startup goroutines ...
	I0827 23:10:12.274576   52129 start.go:246] waiting for cluster config update ...
	I0827 23:10:12.274586   52129 start.go:255] writing updated cluster config ...
	I0827 23:10:12.274822   52129 ssh_runner.go:195] Run: rm -f paused
	I0827 23:10:12.320188   52129 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0827 23:10:12.321951   52129 out.go:201] 
	W0827 23:10:12.322962   52129 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0827 23:10:12.324081   52129 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0827 23:10:12.325371   52129 out.go:177] * Done! kubectl is now configured to use "test-preload-594382" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.219624557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800213219598774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e953300-e400-420f-9ab0-b2c725ecdf4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.220180699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34fc8904-7dc1-47e1-82c8-b8ba416e3b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.220242239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34fc8904-7dc1-47e1-82c8-b8ba416e3b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.220409749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f3bdb96ea038e4a9bfc3447aa3080168656b9126298cf0f2d789db10532e57c,PodSandboxId:25d486e7f86dfa88de22b8ef13f45d252f3815609520dd850f408fccaf427af6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724800206897682156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7rj8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4812ff5-eeea-421a-ba2c-cf00058ea40b,},Annotations:map[string]string{io.kubernetes.container.hash: db014d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be133e5e395f6469f0e53bf94d6bf60c12fe350d53a206bd6465b4efa060a6c1,PodSandboxId:3682daa6aa4841ae854adbf852b8f7b250d32462163df547a93956e57aef51bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724800199673545937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vfc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9372e29c-8f48-4573-a970-4344acc7a4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e11d095c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42644d1c6d8e627a73fd82ee1b137384c691553a5ab2853bedf4e7b1d68d1a0b,PodSandboxId:d7810dd56c93a01e9dc2b1c486fe8fc68e9d87aa4225ceba87af2ba12c091e20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724800199444007835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2
fd95d7-5a52-4a48-8553-2f05e0bb716d,},Annotations:map[string]string{io.kubernetes.container.hash: f0d4368,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66a9b594ebef5c6a12086f48ad486db62b11d3f35979a8e26e068c075b49976,PodSandboxId:84f509a3103efc9742c06be2c469c1254737c0c606bcf589cb08149fbbd158ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724800193473770710,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6969550993a124619bb7b24f2414dfc0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 67c0850b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8af9e0cd571bbb9fcdab7b84d9abd7b82e3e4361f62d310e0f8cb63a44adc83,PodSandboxId:b9c18d05ad495229fe350291de62db9b46106862debd6dc29f00ec59d9a45fd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724800193409575137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930311ce660dd046084a84813793e6a8,},Annotations:map[
string]string{io.kubernetes.container.hash: 73e453e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052358546b7b7c505719487291b6f91504fdf720084a5e0125420168ae815054,PodSandboxId:c6f2f3cd59bb7bcf00bee7fb2c51c4de3d67e72eed57f18e6edb290986531b22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724800193423511242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b53844351454f3b974dbc7224af95a,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c301ec776aec3a9fdf349d5aac3eedee8aa8faaa63646a670c8a66ae858b8030,PodSandboxId:7e65ed946bdc9005f4f9ccbd5e8cdc31c14948d4dfe21b572f18ba7d20f4c0b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724800193359475515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f067e8e31f4b46ab502c4df795acca3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34fc8904-7dc1-47e1-82c8-b8ba416e3b06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.259635205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8f9cc80-d8bf-40a9-ad5b-d895531e85ce name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.259709409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8f9cc80-d8bf-40a9-ad5b-d895531e85ce name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.260689238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9f36fb9-8104-4009-be6b-8e7f78177b64 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.261389538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800213261362403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9f36fb9-8104-4009-be6b-8e7f78177b64 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.262137181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76116c4b-7263-440e-a2e3-c4ad226bd34a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.262202167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76116c4b-7263-440e-a2e3-c4ad226bd34a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.262470361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f3bdb96ea038e4a9bfc3447aa3080168656b9126298cf0f2d789db10532e57c,PodSandboxId:25d486e7f86dfa88de22b8ef13f45d252f3815609520dd850f408fccaf427af6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724800206897682156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7rj8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4812ff5-eeea-421a-ba2c-cf00058ea40b,},Annotations:map[string]string{io.kubernetes.container.hash: db014d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be133e5e395f6469f0e53bf94d6bf60c12fe350d53a206bd6465b4efa060a6c1,PodSandboxId:3682daa6aa4841ae854adbf852b8f7b250d32462163df547a93956e57aef51bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724800199673545937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vfc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9372e29c-8f48-4573-a970-4344acc7a4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e11d095c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42644d1c6d8e627a73fd82ee1b137384c691553a5ab2853bedf4e7b1d68d1a0b,PodSandboxId:d7810dd56c93a01e9dc2b1c486fe8fc68e9d87aa4225ceba87af2ba12c091e20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724800199444007835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2
fd95d7-5a52-4a48-8553-2f05e0bb716d,},Annotations:map[string]string{io.kubernetes.container.hash: f0d4368,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66a9b594ebef5c6a12086f48ad486db62b11d3f35979a8e26e068c075b49976,PodSandboxId:84f509a3103efc9742c06be2c469c1254737c0c606bcf589cb08149fbbd158ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724800193473770710,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6969550993a124619bb7b24f2414dfc0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 67c0850b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8af9e0cd571bbb9fcdab7b84d9abd7b82e3e4361f62d310e0f8cb63a44adc83,PodSandboxId:b9c18d05ad495229fe350291de62db9b46106862debd6dc29f00ec59d9a45fd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724800193409575137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930311ce660dd046084a84813793e6a8,},Annotations:map[
string]string{io.kubernetes.container.hash: 73e453e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052358546b7b7c505719487291b6f91504fdf720084a5e0125420168ae815054,PodSandboxId:c6f2f3cd59bb7bcf00bee7fb2c51c4de3d67e72eed57f18e6edb290986531b22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724800193423511242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b53844351454f3b974dbc7224af95a,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c301ec776aec3a9fdf349d5aac3eedee8aa8faaa63646a670c8a66ae858b8030,PodSandboxId:7e65ed946bdc9005f4f9ccbd5e8cdc31c14948d4dfe21b572f18ba7d20f4c0b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724800193359475515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f067e8e31f4b46ab502c4df795acca3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76116c4b-7263-440e-a2e3-c4ad226bd34a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.303138646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca0555b6-64b0-458f-96c5-74fea53e2f8f name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.303231395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca0555b6-64b0-458f-96c5-74fea53e2f8f name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.304509380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c6e8b48-e24d-4c2a-bbae-87cee634735c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.305061381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800213305037556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c6e8b48-e24d-4c2a-bbae-87cee634735c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.305929941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f33568a0-3294-4876-8c1b-859d85065e57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.305992971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f33568a0-3294-4876-8c1b-859d85065e57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.306144239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f3bdb96ea038e4a9bfc3447aa3080168656b9126298cf0f2d789db10532e57c,PodSandboxId:25d486e7f86dfa88de22b8ef13f45d252f3815609520dd850f408fccaf427af6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724800206897682156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7rj8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4812ff5-eeea-421a-ba2c-cf00058ea40b,},Annotations:map[string]string{io.kubernetes.container.hash: db014d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be133e5e395f6469f0e53bf94d6bf60c12fe350d53a206bd6465b4efa060a6c1,PodSandboxId:3682daa6aa4841ae854adbf852b8f7b250d32462163df547a93956e57aef51bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724800199673545937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vfc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9372e29c-8f48-4573-a970-4344acc7a4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e11d095c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42644d1c6d8e627a73fd82ee1b137384c691553a5ab2853bedf4e7b1d68d1a0b,PodSandboxId:d7810dd56c93a01e9dc2b1c486fe8fc68e9d87aa4225ceba87af2ba12c091e20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724800199444007835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2
fd95d7-5a52-4a48-8553-2f05e0bb716d,},Annotations:map[string]string{io.kubernetes.container.hash: f0d4368,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66a9b594ebef5c6a12086f48ad486db62b11d3f35979a8e26e068c075b49976,PodSandboxId:84f509a3103efc9742c06be2c469c1254737c0c606bcf589cb08149fbbd158ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724800193473770710,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6969550993a124619bb7b24f2414dfc0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 67c0850b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8af9e0cd571bbb9fcdab7b84d9abd7b82e3e4361f62d310e0f8cb63a44adc83,PodSandboxId:b9c18d05ad495229fe350291de62db9b46106862debd6dc29f00ec59d9a45fd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724800193409575137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930311ce660dd046084a84813793e6a8,},Annotations:map[
string]string{io.kubernetes.container.hash: 73e453e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052358546b7b7c505719487291b6f91504fdf720084a5e0125420168ae815054,PodSandboxId:c6f2f3cd59bb7bcf00bee7fb2c51c4de3d67e72eed57f18e6edb290986531b22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724800193423511242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b53844351454f3b974dbc7224af95a,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c301ec776aec3a9fdf349d5aac3eedee8aa8faaa63646a670c8a66ae858b8030,PodSandboxId:7e65ed946bdc9005f4f9ccbd5e8cdc31c14948d4dfe21b572f18ba7d20f4c0b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724800193359475515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f067e8e31f4b46ab502c4df795acca3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f33568a0-3294-4876-8c1b-859d85065e57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.337987338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99f94289-fe90-4dea-a637-9c96a0bd9569 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.338068595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99f94289-fe90-4dea-a637-9c96a0bd9569 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.339223778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=942f989a-29a4-4ecf-96de-dc14393209e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.339653395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800213339630383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=942f989a-29a4-4ecf-96de-dc14393209e2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.340179589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42caf32f-fc9d-4876-b184-cbd46b945d78 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.340243741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42caf32f-fc9d-4876-b184-cbd46b945d78 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:10:13 test-preload-594382 crio[659]: time="2024-08-27 23:10:13.340413688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f3bdb96ea038e4a9bfc3447aa3080168656b9126298cf0f2d789db10532e57c,PodSandboxId:25d486e7f86dfa88de22b8ef13f45d252f3815609520dd850f408fccaf427af6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724800206897682156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7rj8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4812ff5-eeea-421a-ba2c-cf00058ea40b,},Annotations:map[string]string{io.kubernetes.container.hash: db014d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be133e5e395f6469f0e53bf94d6bf60c12fe350d53a206bd6465b4efa060a6c1,PodSandboxId:3682daa6aa4841ae854adbf852b8f7b250d32462163df547a93956e57aef51bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724800199673545937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vfc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9372e29c-8f48-4573-a970-4344acc7a4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e11d095c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42644d1c6d8e627a73fd82ee1b137384c691553a5ab2853bedf4e7b1d68d1a0b,PodSandboxId:d7810dd56c93a01e9dc2b1c486fe8fc68e9d87aa4225ceba87af2ba12c091e20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724800199444007835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2
fd95d7-5a52-4a48-8553-2f05e0bb716d,},Annotations:map[string]string{io.kubernetes.container.hash: f0d4368,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66a9b594ebef5c6a12086f48ad486db62b11d3f35979a8e26e068c075b49976,PodSandboxId:84f509a3103efc9742c06be2c469c1254737c0c606bcf589cb08149fbbd158ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724800193473770710,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6969550993a124619bb7b24f2414dfc0,},Annot
ations:map[string]string{io.kubernetes.container.hash: 67c0850b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8af9e0cd571bbb9fcdab7b84d9abd7b82e3e4361f62d310e0f8cb63a44adc83,PodSandboxId:b9c18d05ad495229fe350291de62db9b46106862debd6dc29f00ec59d9a45fd0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724800193409575137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930311ce660dd046084a84813793e6a8,},Annotations:map[
string]string{io.kubernetes.container.hash: 73e453e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052358546b7b7c505719487291b6f91504fdf720084a5e0125420168ae815054,PodSandboxId:c6f2f3cd59bb7bcf00bee7fb2c51c4de3d67e72eed57f18e6edb290986531b22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724800193423511242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b53844351454f3b974dbc7224af95a,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c301ec776aec3a9fdf349d5aac3eedee8aa8faaa63646a670c8a66ae858b8030,PodSandboxId:7e65ed946bdc9005f4f9ccbd5e8cdc31c14948d4dfe21b572f18ba7d20f4c0b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724800193359475515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-594382,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f067e8e31f4b46ab502c4df795acca3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42caf32f-fc9d-4876-b184-cbd46b945d78 name=/runtime.v1.RuntimeService/ListContainers
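The debug entries above record the CRI gRPC round trips (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) that CRI-O is serving in quick succession. The same RPCs can be exercised by hand with crictl; a minimal sketch, assuming the crio socket path recorded in the node annotations further down:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers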
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1f3bdb96ea038       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   25d486e7f86df       coredns-6d4b75cb6d-7rj8d
	be133e5e395f6       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   3682daa6aa484       kube-proxy-6vfc7
	42644d1c6d8e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   d7810dd56c93a       storage-provisioner
	d66a9b594ebef       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   84f509a3103ef       etcd-test-preload-594382
	052358546b7b7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   c6f2f3cd59bb7       kube-controller-manager-test-preload-594382
	d8af9e0cd571b       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   b9c18d05ad495       kube-apiserver-test-preload-594382
	c301ec776aec3       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   7e65ed946bdc9       kube-scheduler-test-preload-594382
	
	
	==> coredns [1f3bdb96ea038e4a9bfc3447aa3080168656b9126298cf0f2d789db10532e57c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:57675 - 63414 "HINFO IN 5982207289855225266.6376983561241909571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016682161s
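The single NXDOMAIN line above is CoreDNS's startup self-query (a random HINFO lookup against itself), not a resolution failure. The configuration whose MD5 is logged can be inspected from the cluster; a sketch, assuming the stock kubeadm/minikube object names:

  kubectl -n kube-system get configmap coredns -o yaml            # the Corefile behind the logged MD5
  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20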
	
	
	==> describe nodes <==
	Name:               test-preload-594382
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-594382
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=test-preload-594382
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T23_06_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:06:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-594382
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:10:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:10:08 +0000   Tue, 27 Aug 2024 23:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:10:08 +0000   Tue, 27 Aug 2024 23:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:10:08 +0000   Tue, 27 Aug 2024 23:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:10:08 +0000   Tue, 27 Aug 2024 23:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    test-preload-594382
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 097a68a03b6249beadde21b35a3424c9
	  System UUID:                097a68a0-3b62-49be-adde-21b35a3424c9
	  Boot ID:                    904c408d-cc4e-4cc6-a48a-4dce4193a4df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7rj8d                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m45s
	  kube-system                 etcd-test-preload-594382                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m57s
	  kube-system                 kube-apiserver-test-preload-594382             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-test-preload-594382    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-6vfc7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-test-preload-594382             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 3m42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m6s (x5 over 4m6s)  kubelet          Node test-preload-594382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x5 over 4m6s)  kubelet          Node test-preload-594382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x4 over 4m6s)  kubelet          Node test-preload-594382 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m57s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m57s                kubelet          Node test-preload-594382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s                kubelet          Node test-preload-594382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s                kubelet          Node test-preload-594382 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m47s                kubelet          Node test-preload-594382 status is now: NodeReady
	  Normal  RegisteredNode           3m46s                node-controller  Node test-preload-594382 event: Registered Node test-preload-594382 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-594382 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-594382 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-594382 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-594382 event: Registered Node test-preload-594382 in Controller
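The "Allocated resources" figures above are the column sums of the pod requests: 100m + 100m + 250m + 200m + 100m = 750m CPU, i.e. 37% of the 2-CPU node, and 70Mi + 100Mi = 170Mi of the 2164184Ki allocatable memory (about 8%). The whole section corresponds to a node describe; a sketch, assuming kubeconfig points at this cluster:

  kubectl describe node test-preload-594382
  kubectl get pods -A -o wide --field-selector spec.nodeName=test-preload-594382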
	
	
	==> dmesg <==
	[Aug27 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050019] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.845107] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.514581] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.398022] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.057184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051110] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.196451] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.108610] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.253114] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +13.075054] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.057964] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.536883] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +5.526200] kauditd_printk_skb: 105 callbacks suppressed
	[Aug27 23:10] systemd-fstab-generator[1727]: Ignoring "noauto" option for root device
	[  +6.164272] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d66a9b594ebef5c6a12086f48ad486db62b11d3f35979a8e26e068c075b49976] <==
	{"level":"info","ts":"2024-08-27T23:09:53.828Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"46b6e3fd62fd4110","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-27T23:09:53.829Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-27T23:09:53.829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 switched to configuration voters=(5095510705843290384)"}
	{"level":"info","ts":"2024-08-27T23:09:53.829Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f5f955826d71045b","local-member-id":"46b6e3fd62fd4110","added-peer-id":"46b6e3fd62fd4110","added-peer-peer-urls":["https://192.168.39.25:2380"]}
	{"level":"info","ts":"2024-08-27T23:09:53.829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f5f955826d71045b","local-member-id":"46b6e3fd62fd4110","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:09:53.829Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:09:53.834Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T23:09:53.838Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"46b6e3fd62fd4110","initial-advertise-peer-urls":["https://192.168.39.25:2380"],"listen-peer-urls":["https://192.168.39.25:2380"],"advertise-client-urls":["https://192.168.39.25:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.25:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T23:09:53.838Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:09:53.838Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.25:2380"}
	{"level":"info","ts":"2024-08-27T23:09:53.838Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.25:2380"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 received MsgPreVoteResp from 46b6e3fd62fd4110 at term 2"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 received MsgVoteResp from 46b6e3fd62fd4110 at term 3"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46b6e3fd62fd4110 elected leader 46b6e3fd62fd4110 at term 3"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"46b6e3fd62fd4110","local-member-attributes":"{Name:test-preload-594382 ClientURLs:[https://192.168.39.25:2379]}","request-path":"/0/members/46b6e3fd62fd4110/attributes","cluster-id":"f5f955826d71045b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:09:55.494Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:09:55.496Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.25:2379"}
	{"level":"info","ts":"2024-08-27T23:09:55.496Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:09:55.497Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:09:55.497Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:09:55.497Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:10:13 up 0 min,  0 users,  load average: 0.35, 0.10, 0.04
	Linux test-preload-594382 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d8af9e0cd571bbb9fcdab7b84d9abd7b82e3e4361f62d310e0f8cb63a44adc83] <==
	I0827 23:09:57.824358       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0827 23:09:57.824396       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 23:09:57.824696       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 23:09:57.881005       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0827 23:09:57.881034       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0827 23:09:57.822376       1 controller.go:80] Starting OpenAPI V3 AggregationController
	E0827 23:09:57.927246       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0827 23:09:57.978498       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0827 23:09:57.981073       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0827 23:09:57.981982       1 cache.go:39] Caches are synced for autoregister controller
	I0827 23:09:58.010693       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0827 23:09:58.023611       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 23:09:58.023695       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0827 23:09:58.023912       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 23:09:58.024072       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0827 23:09:58.503743       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0827 23:09:58.827272       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0827 23:09:59.397372       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0827 23:09:59.413541       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0827 23:09:59.469441       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0827 23:09:59.491739       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 23:09:59.501298       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 23:09:59.922287       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0827 23:10:10.330012       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 23:10:10.554723       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [052358546b7b7c505719487291b6f91504fdf720084a5e0125420168ae815054] <==
	I0827 23:10:10.302729       1 shared_informer.go:262] Caches are synced for disruption
	I0827 23:10:10.302822       1 disruption.go:371] Sending events to api server.
	I0827 23:10:10.306107       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0827 23:10:10.306149       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0827 23:10:10.306216       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0827 23:10:10.306245       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0827 23:10:10.308587       1 shared_informer.go:262] Caches are synced for TTL
	I0827 23:10:10.314344       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0827 23:10:10.317854       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0827 23:10:10.320248       1 shared_informer.go:262] Caches are synced for daemon sets
	I0827 23:10:10.325279       1 shared_informer.go:262] Caches are synced for deployment
	I0827 23:10:10.329379       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0827 23:10:10.374892       1 shared_informer.go:262] Caches are synced for PVC protection
	I0827 23:10:10.389382       1 shared_informer.go:262] Caches are synced for persistent volume
	I0827 23:10:10.397754       1 shared_informer.go:262] Caches are synced for ephemeral
	I0827 23:10:10.412166       1 shared_informer.go:262] Caches are synced for expand
	I0827 23:10:10.416616       1 shared_informer.go:262] Caches are synced for stateful set
	I0827 23:10:10.439200       1 shared_informer.go:262] Caches are synced for attach detach
	I0827 23:10:10.479558       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0827 23:10:10.529674       1 shared_informer.go:262] Caches are synced for resource quota
	I0827 23:10:10.542659       1 shared_informer.go:262] Caches are synced for endpoint
	I0827 23:10:10.542755       1 shared_informer.go:262] Caches are synced for resource quota
	I0827 23:10:10.971121       1 shared_informer.go:262] Caches are synced for garbage collector
	I0827 23:10:11.013747       1 shared_informer.go:262] Caches are synced for garbage collector
	I0827 23:10:11.013780       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [be133e5e395f6469f0e53bf94d6bf60c12fe350d53a206bd6465b4efa060a6c1] <==
	I0827 23:09:59.856704       1 node.go:163] Successfully retrieved node IP: 192.168.39.25
	I0827 23:09:59.857021       1 server_others.go:138] "Detected node IP" address="192.168.39.25"
	I0827 23:09:59.857107       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0827 23:09:59.905266       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0827 23:09:59.905304       1 server_others.go:206] "Using iptables Proxier"
	I0827 23:09:59.905616       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0827 23:09:59.906528       1 server.go:661] "Version info" version="v1.24.4"
	I0827 23:09:59.906592       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:09:59.908891       1 config.go:317] "Starting service config controller"
	I0827 23:09:59.909238       1 config.go:226] "Starting endpoint slice config controller"
	I0827 23:09:59.909266       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0827 23:09:59.909540       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0827 23:09:59.916102       1 config.go:444] "Starting node config controller"
	I0827 23:09:59.916131       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0827 23:10:00.009914       1 shared_informer.go:262] Caches are synced for service config
	I0827 23:10:00.011466       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0827 23:10:00.017015       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c301ec776aec3a9fdf349d5aac3eedee8aa8faaa63646a670c8a66ae858b8030] <==
	I0827 23:09:54.282264       1 serving.go:348] Generated self-signed cert in-memory
	W0827 23:09:57.896643       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 23:09:57.896848       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 23:09:57.896887       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:09:57.896955       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:09:57.931541       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0827 23:09:57.931662       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:09:57.940051       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0827 23:09:57.940657       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:09:57.940961       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:09:57.945320       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0827 23:09:58.046212       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 23:09:57 test-preload-594382 kubelet[1119]: I0827 23:09:57.965999    1119 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-594382"
	Aug 27 23:09:57 test-preload-594382 kubelet[1119]: I0827 23:09:57.970061    1119 setters.go:532] "Node became not ready" node="test-preload-594382" condition={Type:Ready Status:False LastHeartbeatTime:2024-08-27 23:09:57.969966402 +0000 UTC m=+5.413230117 LastTransitionTime:2024-08-27 23:09:57.969966402 +0000 UTC m=+5.413230117 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.677102    1119 apiserver.go:52] "Watching apiserver"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.682630    1119 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.682736    1119 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.682775    1119 topology_manager.go:200] "Topology Admit Handler"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: E0827 23:09:58.685521    1119 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7rj8d" podUID=b4812ff5-eeea-421a-ba2c-cf00058ea40b
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745563    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nq2\" (UniqueName: \"kubernetes.io/projected/b4812ff5-eeea-421a-ba2c-cf00058ea40b-kube-api-access-z9nq2\") pod \"coredns-6d4b75cb6d-7rj8d\" (UID: \"b4812ff5-eeea-421a-ba2c-cf00058ea40b\") " pod="kube-system/coredns-6d4b75cb6d-7rj8d"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745604    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9372e29c-8f48-4573-a970-4344acc7a4ae-xtables-lock\") pod \"kube-proxy-6vfc7\" (UID: \"9372e29c-8f48-4573-a970-4344acc7a4ae\") " pod="kube-system/kube-proxy-6vfc7"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745631    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9454\" (UniqueName: \"kubernetes.io/projected/f2fd95d7-5a52-4a48-8553-2f05e0bb716d-kube-api-access-k9454\") pod \"storage-provisioner\" (UID: \"f2fd95d7-5a52-4a48-8553-2f05e0bb716d\") " pod="kube-system/storage-provisioner"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745658    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume\") pod \"coredns-6d4b75cb6d-7rj8d\" (UID: \"b4812ff5-eeea-421a-ba2c-cf00058ea40b\") " pod="kube-system/coredns-6d4b75cb6d-7rj8d"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745677    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4pt\" (UniqueName: \"kubernetes.io/projected/9372e29c-8f48-4573-a970-4344acc7a4ae-kube-api-access-6f4pt\") pod \"kube-proxy-6vfc7\" (UID: \"9372e29c-8f48-4573-a970-4344acc7a4ae\") " pod="kube-system/kube-proxy-6vfc7"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745697    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f2fd95d7-5a52-4a48-8553-2f05e0bb716d-tmp\") pod \"storage-provisioner\" (UID: \"f2fd95d7-5a52-4a48-8553-2f05e0bb716d\") " pod="kube-system/storage-provisioner"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745719    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9372e29c-8f48-4573-a970-4344acc7a4ae-kube-proxy\") pod \"kube-proxy-6vfc7\" (UID: \"9372e29c-8f48-4573-a970-4344acc7a4ae\") " pod="kube-system/kube-proxy-6vfc7"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745740    1119 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9372e29c-8f48-4573-a970-4344acc7a4ae-lib-modules\") pod \"kube-proxy-6vfc7\" (UID: \"9372e29c-8f48-4573-a970-4344acc7a4ae\") " pod="kube-system/kube-proxy-6vfc7"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: I0827 23:09:58.745756    1119 reconciler.go:159] "Reconciler: start to sync state"
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: E0827 23:09:58.849295    1119 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 27 23:09:58 test-preload-594382 kubelet[1119]: E0827 23:09:58.849407    1119 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume podName:b4812ff5-eeea-421a-ba2c-cf00058ea40b nodeName:}" failed. No retries permitted until 2024-08-27 23:09:59.349370465 +0000 UTC m=+6.792634252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume") pod "coredns-6d4b75cb6d-7rj8d" (UID: "b4812ff5-eeea-421a-ba2c-cf00058ea40b") : object "kube-system"/"coredns" not registered
	Aug 27 23:09:59 test-preload-594382 kubelet[1119]: E0827 23:09:59.351358    1119 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 27 23:09:59 test-preload-594382 kubelet[1119]: E0827 23:09:59.351446    1119 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume podName:b4812ff5-eeea-421a-ba2c-cf00058ea40b nodeName:}" failed. No retries permitted until 2024-08-27 23:10:00.351429746 +0000 UTC m=+7.794693462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume") pod "coredns-6d4b75cb6d-7rj8d" (UID: "b4812ff5-eeea-421a-ba2c-cf00058ea40b") : object "kube-system"/"coredns" not registered
	Aug 27 23:10:00 test-preload-594382 kubelet[1119]: E0827 23:10:00.360868    1119 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 27 23:10:00 test-preload-594382 kubelet[1119]: E0827 23:10:00.360963    1119 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume podName:b4812ff5-eeea-421a-ba2c-cf00058ea40b nodeName:}" failed. No retries permitted until 2024-08-27 23:10:02.360947259 +0000 UTC m=+9.804210989 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume") pod "coredns-6d4b75cb6d-7rj8d" (UID: "b4812ff5-eeea-421a-ba2c-cf00058ea40b") : object "kube-system"/"coredns" not registered
	Aug 27 23:10:00 test-preload-594382 kubelet[1119]: E0827 23:10:00.770934    1119 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7rj8d" podUID=b4812ff5-eeea-421a-ba2c-cf00058ea40b
	Aug 27 23:10:02 test-preload-594382 kubelet[1119]: E0827 23:10:02.380011    1119 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 27 23:10:02 test-preload-594382 kubelet[1119]: E0827 23:10:02.380470    1119 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume podName:b4812ff5-eeea-421a-ba2c-cf00058ea40b nodeName:}" failed. No retries permitted until 2024-08-27 23:10:06.380448387 +0000 UTC m=+13.823712103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b4812ff5-eeea-421a-ba2c-cf00058ea40b-config-volume") pod "coredns-6d4b75cb6d-7rj8d" (UID: "b4812ff5-eeea-421a-ba2c-cf00058ea40b") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [42644d1c6d8e627a73fd82ee1b137384c691553a5ab2853bedf4e7b1d68d1a0b] <==
	I0827 23:09:59.541354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-594382 -n test-preload-594382
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-594382 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-594382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-594382
--- FAIL: TestPreload (310.46s)
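
Editorial note (not part of the recorded test output): in the post-mortem above, the kubelet keeps logging 'No CNI configuration file in /etc/cni/net.d/' and 'object "kube-system"/"coredns" not registered', so the node never reports Ready before the test gives up. A minimal triage sketch for this symptom, assuming the test-preload-594382 profile has not yet been removed by the cleanup step above, would be:

	# Check whether any CNI config was ever written on the node
	out/minikube-linux-amd64 ssh -p test-preload-594382 "ls -l /etc/cni/net.d/"
	# Confirm the node condition and the kube-system pods backing CoreDNS
	kubectl --context test-preload-594382 describe node test-preload-594382
	kubectl --context test-preload-594382 get pods -n kube-system -o wide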

                                                
                                    
TestKubernetesUpgrade (330.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.369436813s)
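
Editorial note (not part of the recorded test output): the stdout below repeats the "Generating certificates and keys ..." / "Booting up control plane ..." steps and the start still exits with status 109 after roughly 4m30s, suggesting the v1.20.0 control plane never came up on CRI-O. A hedged follow-up sketch, assuming the kubernetes-upgrade-772694 profile is still present when investigating, would be:

	# Collect minikube's own logs for the failed start
	out/minikube-linux-amd64 logs -p kubernetes-upgrade-772694
	# Inspect the kubelet on the guest, which launches the kubeadm control-plane pods
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-772694 "sudo journalctl -u kubelet --no-pager -n 50"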

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-772694] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-772694" primary control-plane node in "kubernetes-upgrade-772694" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:15:56.592569   57186 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:15:56.592843   57186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:56.592853   57186 out.go:358] Setting ErrFile to fd 2...
	I0827 23:15:56.592857   57186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:56.593053   57186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:15:56.593696   57186 out.go:352] Setting JSON to false
	I0827 23:15:56.594665   57186 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7104,"bootTime":1724793453,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:15:56.594721   57186 start.go:139] virtualization: kvm guest
	I0827 23:15:56.596984   57186 out.go:177] * [kubernetes-upgrade-772694] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:15:56.598281   57186 notify.go:220] Checking for updates...
	I0827 23:15:56.598286   57186 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:15:56.599567   57186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:15:56.600810   57186 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:15:56.602004   57186 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:15:56.603190   57186 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:15:56.604381   57186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:15:56.605980   57186 config.go:182] Loaded profile config "NoKubernetes-887820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0827 23:15:56.606073   57186 config.go:182] Loaded profile config "cert-expiration-649861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:15:56.606162   57186 config.go:182] Loaded profile config "running-upgrade-906048": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0827 23:15:56.606230   57186 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:15:56.643006   57186 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 23:15:56.644354   57186 start.go:297] selected driver: kvm2
	I0827 23:15:56.644381   57186 start.go:901] validating driver "kvm2" against <nil>
	I0827 23:15:56.644403   57186 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:15:56.645399   57186 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:15:56.645511   57186 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:15:56.661717   57186 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:15:56.661764   57186 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:15:56.661991   57186 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 23:15:56.662020   57186 cni.go:84] Creating CNI manager for ""
	I0827 23:15:56.662029   57186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:15:56.662038   57186 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 23:15:56.662080   57186 start.go:340] cluster config:
	{Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:15:56.662166   57186 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:15:56.663993   57186 out.go:177] * Starting "kubernetes-upgrade-772694" primary control-plane node in "kubernetes-upgrade-772694" cluster
	I0827 23:15:56.665208   57186 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 23:15:56.665244   57186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0827 23:15:56.665254   57186 cache.go:56] Caching tarball of preloaded images
	I0827 23:15:56.665336   57186 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 23:15:56.665347   57186 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0827 23:15:56.665442   57186 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/config.json ...
	I0827 23:15:56.665460   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/config.json: {Name:mk4f14cb72960a71cdf1b0617c8f0b8bb55cd803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:15:56.665581   57186 start.go:360] acquireMachinesLock for kubernetes-upgrade-772694: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:15:57.704759   57186 start.go:364] duration metric: took 1.039143336s to acquireMachinesLock for "kubernetes-upgrade-772694"
	I0827 23:15:57.704838   57186 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:15:57.704949   57186 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 23:15:57.707461   57186 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 23:15:57.707637   57186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:15:57.707678   57186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:15:57.727459   57186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I0827 23:15:57.727877   57186 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:15:57.728380   57186 main.go:141] libmachine: Using API Version  1
	I0827 23:15:57.728405   57186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:15:57.728828   57186 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:15:57.729023   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:15:57.729196   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:15:57.729351   57186 start.go:159] libmachine.API.Create for "kubernetes-upgrade-772694" (driver="kvm2")
	I0827 23:15:57.729383   57186 client.go:168] LocalClient.Create starting
	I0827 23:15:57.729427   57186 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 23:15:57.729469   57186 main.go:141] libmachine: Decoding PEM data...
	I0827 23:15:57.729496   57186 main.go:141] libmachine: Parsing certificate...
	I0827 23:15:57.729589   57186 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 23:15:57.729617   57186 main.go:141] libmachine: Decoding PEM data...
	I0827 23:15:57.729637   57186 main.go:141] libmachine: Parsing certificate...
	I0827 23:15:57.729668   57186 main.go:141] libmachine: Running pre-create checks...
	I0827 23:15:57.729689   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .PreCreateCheck
	I0827 23:15:57.730058   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetConfigRaw
	I0827 23:15:57.730519   57186 main.go:141] libmachine: Creating machine...
	I0827 23:15:57.730536   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .Create
	I0827 23:15:57.730684   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Creating KVM machine...
	I0827 23:15:57.731766   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found existing default KVM network
	I0827 23:15:57.733114   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.732965   57208 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2a:88:a0} reservation:<nil>}
	I0827 23:15:57.734198   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.734094   57208 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:19:fd} reservation:<nil>}
	I0827 23:15:57.735103   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.735016   57208 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:c0:9a} reservation:<nil>}
	I0827 23:15:57.737240   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.737113   57208 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0827 23:15:57.738423   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.738357   57208 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c40}
	I0827 23:15:57.738486   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | created network xml: 
	I0827 23:15:57.738513   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | <network>
	I0827 23:15:57.738530   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   <name>mk-kubernetes-upgrade-772694</name>
	I0827 23:15:57.738544   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   <dns enable='no'/>
	I0827 23:15:57.738555   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   
	I0827 23:15:57.738567   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0827 23:15:57.738578   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |     <dhcp>
	I0827 23:15:57.738593   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0827 23:15:57.738607   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |     </dhcp>
	I0827 23:15:57.738619   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   </ip>
	I0827 23:15:57.738628   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG |   
	I0827 23:15:57.738637   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | </network>
	I0827 23:15:57.738650   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | 
	I0827 23:15:57.744109   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | trying to create private KVM network mk-kubernetes-upgrade-772694 192.168.83.0/24...
	I0827 23:15:57.812490   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | private KVM network mk-kubernetes-upgrade-772694 192.168.83.0/24 created
	I0827 23:15:57.812531   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694 ...
	I0827 23:15:57.812545   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:57.812433   57208 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:15:57.812574   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 23:15:57.812595   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 23:15:58.043696   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:58.043537   57208 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa...
	I0827 23:15:58.185243   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:58.185122   57208 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/kubernetes-upgrade-772694.rawdisk...
	I0827 23:15:58.185270   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Writing magic tar header
	I0827 23:15:58.185284   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Writing SSH key tar header
	I0827 23:15:58.185292   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:58.185235   57208 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694 ...
	I0827 23:15:58.185382   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694
	I0827 23:15:58.185411   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 23:15:58.185430   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694 (perms=drwx------)
	I0827 23:15:58.185444   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:15:58.185462   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 23:15:58.185471   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 23:15:58.185479   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home/jenkins
	I0827 23:15:58.185492   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 23:15:58.185504   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Checking permissions on dir: /home
	I0827 23:15:58.185518   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 23:15:58.185530   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 23:15:58.185538   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Skipping /home - not owner
	I0827 23:15:58.185554   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 23:15:58.185566   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 23:15:58.185581   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Creating domain...
	I0827 23:15:58.186754   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) define libvirt domain using xml: 
	I0827 23:15:58.186780   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) <domain type='kvm'>
	I0827 23:15:58.186792   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <name>kubernetes-upgrade-772694</name>
	I0827 23:15:58.186801   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <memory unit='MiB'>2200</memory>
	I0827 23:15:58.186825   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <vcpu>2</vcpu>
	I0827 23:15:58.186839   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <features>
	I0827 23:15:58.186847   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <acpi/>
	I0827 23:15:58.186852   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <apic/>
	I0827 23:15:58.186860   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <pae/>
	I0827 23:15:58.186865   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     
	I0827 23:15:58.186873   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   </features>
	I0827 23:15:58.186879   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <cpu mode='host-passthrough'>
	I0827 23:15:58.186909   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   
	I0827 23:15:58.186925   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   </cpu>
	I0827 23:15:58.186934   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <os>
	I0827 23:15:58.186943   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <type>hvm</type>
	I0827 23:15:58.186955   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <boot dev='cdrom'/>
	I0827 23:15:58.186963   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <boot dev='hd'/>
	I0827 23:15:58.186969   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <bootmenu enable='no'/>
	I0827 23:15:58.186976   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   </os>
	I0827 23:15:58.186982   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   <devices>
	I0827 23:15:58.186993   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <disk type='file' device='cdrom'>
	I0827 23:15:58.187053   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/boot2docker.iso'/>
	I0827 23:15:58.187075   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <target dev='hdc' bus='scsi'/>
	I0827 23:15:58.187089   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <readonly/>
	I0827 23:15:58.187100   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </disk>
	I0827 23:15:58.187113   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <disk type='file' device='disk'>
	I0827 23:15:58.187126   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 23:15:58.187143   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/kubernetes-upgrade-772694.rawdisk'/>
	I0827 23:15:58.187159   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <target dev='hda' bus='virtio'/>
	I0827 23:15:58.187172   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </disk>
	I0827 23:15:58.187182   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <interface type='network'>
	I0827 23:15:58.187195   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <source network='mk-kubernetes-upgrade-772694'/>
	I0827 23:15:58.187206   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <model type='virtio'/>
	I0827 23:15:58.187217   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </interface>
	I0827 23:15:58.187232   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <interface type='network'>
	I0827 23:15:58.187245   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <source network='default'/>
	I0827 23:15:58.187256   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <model type='virtio'/>
	I0827 23:15:58.187269   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </interface>
	I0827 23:15:58.187279   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <serial type='pty'>
	I0827 23:15:58.187291   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <target port='0'/>
	I0827 23:15:58.187304   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </serial>
	I0827 23:15:58.187317   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <console type='pty'>
	I0827 23:15:58.187328   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <target type='serial' port='0'/>
	I0827 23:15:58.187339   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </console>
	I0827 23:15:58.187350   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     <rng model='virtio'>
	I0827 23:15:58.187363   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)       <backend model='random'>/dev/random</backend>
	I0827 23:15:58.187377   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     </rng>
	I0827 23:15:58.187387   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     
	I0827 23:15:58.187397   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)     
	I0827 23:15:58.187412   57186 main.go:141] libmachine: (kubernetes-upgrade-772694)   </devices>
	I0827 23:15:58.187420   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) </domain>
	I0827 23:15:58.187431   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) 
	I0827 23:15:58.191750   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:75:f4:73 in network default
	I0827 23:15:58.192390   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Ensuring networks are active...
	I0827 23:15:58.192433   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:15:58.193214   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Ensuring network default is active
	I0827 23:15:58.193624   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Ensuring network mk-kubernetes-upgrade-772694 is active
	I0827 23:15:58.194269   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Getting domain xml...
	I0827 23:15:58.195207   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Creating domain...
	I0827 23:15:59.448113   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Waiting to get IP...
	I0827 23:15:59.448939   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.449318   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.449376   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:59.449321   57208 retry.go:31] will retry after 214.136652ms: waiting for machine to come up
	I0827 23:15:59.664839   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.665434   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.665464   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:59.665390   57208 retry.go:31] will retry after 310.141224ms: waiting for machine to come up
	I0827 23:15:59.976850   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.977314   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:15:59.977345   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:15:59.977276   57208 retry.go:31] will retry after 479.907115ms: waiting for machine to come up
	I0827 23:16:00.458558   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:00.459130   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:00.459161   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:00.459077   57208 retry.go:31] will retry after 573.388671ms: waiting for machine to come up
	I0827 23:16:01.033551   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:01.034068   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:01.034092   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:01.034021   57208 retry.go:31] will retry after 470.151615ms: waiting for machine to come up
	I0827 23:16:01.505474   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:01.505982   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:01.506009   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:01.505921   57208 retry.go:31] will retry after 879.179323ms: waiting for machine to come up
	I0827 23:16:02.386342   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:02.386986   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:02.387015   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:02.386943   57208 retry.go:31] will retry after 762.317638ms: waiting for machine to come up
	I0827 23:16:03.151457   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:03.152008   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:03.152040   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:03.151956   57208 retry.go:31] will retry after 1.098205339s: waiting for machine to come up
	I0827 23:16:04.251536   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:04.252080   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:04.252110   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:04.252043   57208 retry.go:31] will retry after 1.205882815s: waiting for machine to come up
	I0827 23:16:05.459122   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:05.459746   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:05.459774   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:05.459697   57208 retry.go:31] will retry after 2.147159742s: waiting for machine to come up
	I0827 23:16:07.610111   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:07.610637   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:07.610666   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:07.610583   57208 retry.go:31] will retry after 2.596259663s: waiting for machine to come up
	I0827 23:16:10.209315   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:10.221131   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:10.221169   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:10.221065   57208 retry.go:31] will retry after 2.90962479s: waiting for machine to come up
	I0827 23:16:13.132415   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:13.132987   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:13.133009   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:13.132939   57208 retry.go:31] will retry after 3.481307193s: waiting for machine to come up
	I0827 23:16:16.617456   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:16.617934   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find current IP address of domain kubernetes-upgrade-772694 in network mk-kubernetes-upgrade-772694
	I0827 23:16:16.617959   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | I0827 23:16:16.617868   57208 retry.go:31] will retry after 3.914121726s: waiting for machine to come up
	I0827 23:16:20.535191   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.535700   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Found IP for machine: 192.168.83.89
	I0827 23:16:20.535737   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has current primary IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.535751   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Reserving static IP address...
	I0827 23:16:20.536118   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-772694", mac: "52:54:00:0b:42:dc", ip: "192.168.83.89"} in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.609850   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Getting to WaitForSSH function...
	I0827 23:16:20.609884   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Reserved static IP address: 192.168.83.89
	I0827 23:16:20.609899   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Waiting for SSH to be available...
	I0827 23:16:20.612507   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.613011   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:20.613040   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.613195   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Using SSH client type: external
	I0827 23:16:20.613227   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Using SSH private key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa (-rw-------)
	I0827 23:16:20.613266   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0827 23:16:20.613280   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | About to run SSH command:
	I0827 23:16:20.613293   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | exit 0
	I0827 23:16:20.740153   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | SSH cmd err, output: <nil>: 
	I0827 23:16:20.740526   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) KVM machine creation complete!
	I0827 23:16:20.740822   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetConfigRaw
	I0827 23:16:20.741486   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:20.741673   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:20.741835   57186 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0827 23:16:20.741851   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetState
	I0827 23:16:20.742951   57186 main.go:141] libmachine: Detecting operating system of created instance...
	I0827 23:16:20.742963   57186 main.go:141] libmachine: Waiting for SSH to be available...
	I0827 23:16:20.742968   57186 main.go:141] libmachine: Getting to WaitForSSH function...
	I0827 23:16:20.742974   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:20.745403   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.745816   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:20.745848   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.745968   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:20.746146   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.746290   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.746436   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:20.746604   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:20.746793   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:20.746805   57186 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0827 23:16:20.855505   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
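	(Editor's note: the lines above show libmachine deciding the VM is reachable by running a bare "exit 0" over SSH until it succeeds. The following is a minimal, self-contained Go sketch of that kind of readiness probe, not minikube's actual ssh_runner/libmachine code; the address, user and key path are taken from the log purely as illustrative values.)

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshReady runs "exit 0" over a key-authenticated SSH connection and
	// returns nil only if the command completes, i.e. the guest is up.
	func sshReady(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // non-nil error means the guest is not ready yet
	}

	func main() {
		// Illustrative values taken from the log above; the key path is a placeholder.
		if err := sshReady("192.168.83.89:22", "docker", "/path/to/id_rsa"); err != nil {
			fmt.Println("not ready:", err)
			return
		}
		fmt.Println("ssh ready")
	}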
	I0827 23:16:20.855532   57186 main.go:141] libmachine: Detecting the provisioner...
	I0827 23:16:20.855542   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:20.858916   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.859375   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:20.859412   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.859614   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:20.859841   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.860017   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.860182   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:20.860426   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:20.860612   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:20.860627   57186 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0827 23:16:20.972907   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0827 23:16:20.973026   57186 main.go:141] libmachine: found compatible host: buildroot
	I0827 23:16:20.973040   57186 main.go:141] libmachine: Provisioning with buildroot...
	I0827 23:16:20.973049   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:16:20.973316   57186 buildroot.go:166] provisioning hostname "kubernetes-upgrade-772694"
	I0827 23:16:20.973347   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:16:20.973521   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:20.976010   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.976459   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:20.976499   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:20.976800   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:20.977041   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.977238   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:20.977400   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:20.977580   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:20.977799   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:20.977819   57186 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-772694 && echo "kubernetes-upgrade-772694" | sudo tee /etc/hostname
	I0827 23:16:21.102604   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-772694
	
	I0827 23:16:21.102634   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.105331   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.105672   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.105699   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.105908   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:21.106131   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.106305   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.106436   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:21.106556   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:21.106772   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:21.106796   57186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-772694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-772694/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-772694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:16:21.224590   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:16:21.224617   57186 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 23:16:21.224644   57186 buildroot.go:174] setting up certificates
	I0827 23:16:21.224652   57186 provision.go:84] configureAuth start
	I0827 23:16:21.224669   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:16:21.224963   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:16:21.227566   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.227936   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.228001   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.228181   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.230298   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.230635   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.230672   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.230820   57186 provision.go:143] copyHostCerts
	I0827 23:16:21.230880   57186 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 23:16:21.230896   57186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 23:16:21.230948   57186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 23:16:21.231027   57186 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 23:16:21.231035   57186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 23:16:21.231054   57186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 23:16:21.231104   57186 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 23:16:21.231112   57186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 23:16:21.231127   57186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 23:16:21.231170   57186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-772694 san=[127.0.0.1 192.168.83.89 kubernetes-upgrade-772694 localhost minikube]
	I0827 23:16:21.410196   57186 provision.go:177] copyRemoteCerts
	I0827 23:16:21.410247   57186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:16:21.410269   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.412868   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.413236   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.413262   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.413474   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:21.413672   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.413815   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:21.414028   57186 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:16:21.498639   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 23:16:21.521560   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0827 23:16:21.544424   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:16:21.567383   57186 provision.go:87] duration metric: took 342.720834ms to configureAuth
	I0827 23:16:21.567412   57186 buildroot.go:189] setting minikube options for container-runtime
	I0827 23:16:21.567585   57186 config.go:182] Loaded profile config "kubernetes-upgrade-772694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0827 23:16:21.567665   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.570274   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.570597   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.570620   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.570770   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:21.570942   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.571089   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.571240   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:21.571574   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:21.571735   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:21.571749   57186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 23:16:21.792897   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 23:16:21.792923   57186 main.go:141] libmachine: Checking connection to Docker...
	I0827 23:16:21.792931   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetURL
	I0827 23:16:21.794398   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | Using libvirt version 6000000
	I0827 23:16:21.796571   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.796920   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.796951   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.797076   57186 main.go:141] libmachine: Docker is up and running!
	I0827 23:16:21.797092   57186 main.go:141] libmachine: Reticulating splines...
	I0827 23:16:21.797099   57186 client.go:171] duration metric: took 24.067705909s to LocalClient.Create
	I0827 23:16:21.797139   57186 start.go:167] duration metric: took 24.067776746s to libmachine.API.Create "kubernetes-upgrade-772694"
	I0827 23:16:21.797155   57186 start.go:293] postStartSetup for "kubernetes-upgrade-772694" (driver="kvm2")
	I0827 23:16:21.797169   57186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:16:21.797192   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:21.797557   57186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:16:21.797591   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.799658   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.800006   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.800033   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.800158   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:21.800361   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.800517   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:21.800671   57186 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:16:21.886468   57186 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:16:21.890651   57186 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 23:16:21.890675   57186 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 23:16:21.890742   57186 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 23:16:21.890820   57186 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 23:16:21.890916   57186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:16:21.899999   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:16:21.922561   57186 start.go:296] duration metric: took 125.391113ms for postStartSetup
	I0827 23:16:21.922623   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetConfigRaw
	I0827 23:16:21.923205   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:16:21.925640   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.926005   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.926032   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.926327   57186 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/config.json ...
	I0827 23:16:21.926529   57186 start.go:128] duration metric: took 24.221568321s to createHost
	I0827 23:16:21.926553   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:21.928809   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.929185   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:21.929218   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:21.929334   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:21.929539   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.929713   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:21.929860   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:21.930037   57186 main.go:141] libmachine: Using SSH client type: native
	I0827 23:16:21.930235   57186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:16:21.930247   57186 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 23:16:22.041161   57186 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724800582.017935954
	
	I0827 23:16:22.041182   57186 fix.go:216] guest clock: 1724800582.017935954
	I0827 23:16:22.041189   57186 fix.go:229] Guest: 2024-08-27 23:16:22.017935954 +0000 UTC Remote: 2024-08-27 23:16:21.926540119 +0000 UTC m=+25.369001221 (delta=91.395835ms)
	I0827 23:16:22.041208   57186 fix.go:200] guest clock delta is within tolerance: 91.395835ms
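	(Editor's note: fix.go above parses the guest's `date +%s.%N` output and compares it with the host clock, accepting the run because the ~91ms delta is inside tolerance. Below is a small hypothetical Go sketch of that parse-and-compare step, not minikube's fix.go; the sample string comes from the log and the 2s tolerance is an assumption.)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (9-digit nanosecond part)
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1724800582.017935954") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// 2s is an assumed tolerance for this sketch, not minikube's constant.
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
	}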
	I0827 23:16:22.041214   57186 start.go:83] releasing machines lock for "kubernetes-upgrade-772694", held for 24.336413193s
	I0827 23:16:22.041234   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:22.041554   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:16:22.044679   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.045051   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:22.045075   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.045231   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:22.045737   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:22.045918   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:16:22.046011   57186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:16:22.046072   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:22.046132   57186 ssh_runner.go:195] Run: cat /version.json
	I0827 23:16:22.046179   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:16:22.049950   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.050128   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.050354   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:22.050377   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.050545   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:22.050731   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:22.050760   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:22.050787   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:16:22.050798   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:22.050910   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:16:22.050924   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:22.051066   57186 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:16:22.051078   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:16:22.051267   57186 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:16:22.181568   57186 ssh_runner.go:195] Run: systemctl --version
	I0827 23:16:22.187788   57186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 23:16:22.349091   57186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 23:16:22.355058   57186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 23:16:22.355144   57186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:16:22.374818   57186 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0827 23:16:22.374845   57186 start.go:495] detecting cgroup driver to use...
	I0827 23:16:22.374909   57186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 23:16:22.391529   57186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 23:16:22.405190   57186 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:16:22.405292   57186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:16:22.418962   57186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:16:22.435420   57186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:16:22.565746   57186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:16:22.728589   57186 docker.go:233] disabling docker service ...
	I0827 23:16:22.728681   57186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:16:22.746241   57186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:16:22.758452   57186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:16:22.866836   57186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:16:22.981425   57186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:16:22.999292   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:16:23.018993   57186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0827 23:16:23.019061   57186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:16:23.029122   57186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 23:16:23.029195   57186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:16:23.039141   57186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:16:23.049421   57186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:16:23.059555   57186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:16:23.069498   57186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:16:23.078243   57186 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0827 23:16:23.078314   57186 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0827 23:16:23.092277   57186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:16:23.103656   57186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:16:23.227516   57186 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 23:16:23.323604   57186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 23:16:23.323718   57186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 23:16:23.329786   57186 start.go:563] Will wait 60s for crictl version
	I0827 23:16:23.329849   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:23.333848   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:16:23.374976   57186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 23:16:23.375061   57186 ssh_runner.go:195] Run: crio --version
	I0827 23:16:23.406700   57186 ssh_runner.go:195] Run: crio --version
	I0827 23:16:23.440669   57186 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0827 23:16:23.441870   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:16:23.446252   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:23.446854   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:16:11 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:16:23.446882   57186 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:16:23.447204   57186 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0827 23:16:23.451622   57186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:16:23.464123   57186 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:16:23.464223   57186 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 23:16:23.464264   57186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:16:23.500248   57186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0827 23:16:23.500320   57186 ssh_runner.go:195] Run: which lz4
	I0827 23:16:23.504240   57186 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0827 23:16:23.508190   57186 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0827 23:16:23.508216   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0827 23:16:24.978284   57186 crio.go:462] duration metric: took 1.4740775s to copy over tarball
	I0827 23:16:24.978349   57186 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0827 23:16:27.489719   57186 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.511338983s)
	I0827 23:16:27.489745   57186 crio.go:469] duration metric: took 2.511438446s to extract the tarball
	I0827 23:16:27.489753   57186 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0827 23:16:27.531211   57186 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:16:27.575693   57186 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0827 23:16:27.575721   57186 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0827 23:16:27.575801   57186 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:16:27.575844   57186 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:27.575854   57186 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:27.575884   57186 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0827 23:16:27.575887   57186 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0827 23:16:27.575822   57186 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:27.575822   57186 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:27.576172   57186 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:27.577134   57186 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:16:27.577463   57186 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0827 23:16:27.577474   57186 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:27.577538   57186 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:27.577570   57186 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:27.577654   57186 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0827 23:16:27.577732   57186 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:27.577780   57186 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:27.781662   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0827 23:16:27.823852   57186 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0827 23:16:27.823896   57186 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0827 23:16:27.823950   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:27.827827   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0827 23:16:27.858926   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:27.859685   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0827 23:16:27.876692   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:27.882305   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:27.884006   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:27.900683   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0827 23:16:27.943487   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0827 23:16:27.943533   57186 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0827 23:16:27.943568   57186 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:27.943625   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:27.948658   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:27.990421   57186 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0827 23:16:27.990464   57186 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:27.990478   57186 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0827 23:16:27.990508   57186 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:27.990513   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:27.990590   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:28.037490   57186 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0827 23:16:28.037537   57186 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0827 23:16:28.037548   57186 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0827 23:16:28.037565   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:28.037579   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:28.037600   57186 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:28.037640   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:28.061004   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0827 23:16:28.061078   57186 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0827 23:16:28.061116   57186 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:28.061146   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:28.061163   57186 ssh_runner.go:195] Run: which crictl
	I0827 23:16:28.061181   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:28.106581   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:28.106689   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0827 23:16:28.106737   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:28.116578   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:28.161537   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:28.161762   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:28.240851   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0827 23:16:28.240930   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:28.240979   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0827 23:16:28.241018   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0827 23:16:28.241087   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:28.266860   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0827 23:16:28.389717   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0827 23:16:28.389748   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0827 23:16:28.389800   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0827 23:16:28.389861   57186 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0827 23:16:28.389865   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0827 23:16:28.391094   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0827 23:16:28.448813   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0827 23:16:28.448823   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0827 23:16:28.454688   57186 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0827 23:16:28.675266   57186 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:16:28.820184   57186 cache_images.go:92] duration metric: took 1.244443216s to LoadCachedImages
	W0827 23:16:28.820316   57186 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19522-7571/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0827 23:16:28.820335   57186 kubeadm.go:934] updating node { 192.168.83.89 8443 v1.20.0 crio true true} ...
	I0827 23:16:28.820497   57186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-772694 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:16:28.820595   57186 ssh_runner.go:195] Run: crio config
	I0827 23:16:28.868013   57186 cni.go:84] Creating CNI manager for ""
	I0827 23:16:28.868039   57186 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:16:28.868050   57186 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:16:28.868074   57186 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-772694 NodeName:kubernetes-upgrade-772694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0827 23:16:28.868261   57186 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-772694"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:16:28.868333   57186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0827 23:16:28.877953   57186 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:16:28.878028   57186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:16:28.888251   57186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0827 23:16:28.906498   57186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:16:28.922791   57186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0827 23:16:28.938914   57186 ssh_runner.go:195] Run: grep 192.168.83.89	control-plane.minikube.internal$ /etc/hosts
	I0827 23:16:28.942727   57186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:16:28.957529   57186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:16:29.078665   57186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:16:29.096174   57186 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694 for IP: 192.168.83.89
	I0827 23:16:29.096201   57186 certs.go:194] generating shared ca certs ...
	I0827 23:16:29.096217   57186 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.096391   57186 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 23:16:29.096450   57186 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 23:16:29.096489   57186 certs.go:256] generating profile certs ...
	I0827 23:16:29.096557   57186 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.key
	I0827 23:16:29.096586   57186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.crt with IP's: []
	I0827 23:16:29.207344   57186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.crt ...
	I0827 23:16:29.207383   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.crt: {Name:mk61f918b33d7c6e9c584605261c2cdc6157d626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.207601   57186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.key ...
	I0827 23:16:29.207624   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.key: {Name:mkf4838ceadf52faaa7edeb3f06f618bfd401b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.207779   57186 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key.9a38dc4c
	I0827 23:16:29.207804   57186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt.9a38dc4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.89]
	I0827 23:16:29.488012   57186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt.9a38dc4c ...
	I0827 23:16:29.488046   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt.9a38dc4c: {Name:mk4de4ef241c5d2b52a481f6f25fef8a22565878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.488226   57186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key.9a38dc4c ...
	I0827 23:16:29.488243   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key.9a38dc4c: {Name:mk7f7175a725f45a385904591c1f49030f4351e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.488336   57186 certs.go:381] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt.9a38dc4c -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt
	I0827 23:16:29.488434   57186 certs.go:385] copying /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key.9a38dc4c -> /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key
	I0827 23:16:29.488555   57186 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key
	I0827 23:16:29.488580   57186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.crt with IP's: []
	I0827 23:16:29.622163   57186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.crt ...
	I0827 23:16:29.622194   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.crt: {Name:mkf5d40b4d597dbd6916b5a95aad40301c36f41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.622378   57186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key ...
	I0827 23:16:29.622397   57186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key: {Name:mk6b899f98a38c2d4e6fcde51e84e23634b08c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:16:29.622614   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 23:16:29.622660   57186 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 23:16:29.622676   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:16:29.622706   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 23:16:29.622738   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:16:29.622774   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 23:16:29.622829   57186 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:16:29.623381   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:16:29.648616   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 23:16:29.672418   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:16:29.696416   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:16:29.718831   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0827 23:16:29.746022   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 23:16:29.771883   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:16:29.796280   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:16:29.823319   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:16:29.850522   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 23:16:29.872561   57186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 23:16:29.894891   57186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:16:29.914924   57186 ssh_runner.go:195] Run: openssl version
	I0827 23:16:29.925401   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:16:29.936755   57186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:16:29.941262   57186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:16:29.941324   57186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:16:29.948813   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:16:29.965026   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 23:16:29.982572   57186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 23:16:29.989699   57186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 23:16:29.989770   57186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 23:16:29.997533   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 23:16:30.008597   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 23:16:30.020508   57186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 23:16:30.025273   57186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 23:16:30.025357   57186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 23:16:30.031160   57186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:16:30.042877   57186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:16:30.048319   57186 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 23:16:30.048408   57186 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:16:30.048527   57186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 23:16:30.048595   57186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:16:30.083329   57186 cri.go:89] found id: ""
	I0827 23:16:30.083403   57186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 23:16:30.093147   57186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 23:16:30.102601   57186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:16:30.116185   57186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:16:30.116207   57186 kubeadm.go:157] found existing configuration files:
	
	I0827 23:16:30.116261   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:16:30.128401   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:16:30.128499   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:16:30.141162   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:16:30.153142   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:16:30.153228   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:16:30.165995   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:16:30.178060   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:16:30.178196   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:16:30.190913   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:16:30.200897   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:16:30.200974   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:16:30.213869   57186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 23:16:30.502340   57186 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 23:18:28.703524   57186 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0827 23:18:28.703638   57186 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0827 23:18:28.705497   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:28.705571   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:28.705694   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:28.705809   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:28.705934   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:28.705999   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:28.707739   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:28.707823   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:28.707896   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:28.707970   57186 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 23:18:28.708018   57186 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 23:18:28.708093   57186 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 23:18:28.708141   57186 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 23:18:28.708228   57186 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 23:18:28.708405   57186 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708498   57186 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 23:18:28.708676   57186 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708801   57186 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 23:18:28.708907   57186 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 23:18:28.708973   57186 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 23:18:28.709048   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:28.709116   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:28.709179   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:28.709247   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:28.709319   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:28.709449   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:28.709579   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:28.709632   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:28.709707   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:28.711322   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:28.711397   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:28.711465   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:28.711523   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:28.711593   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:28.711779   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:18:28.711847   57186 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0827 23:18:28.711926   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712158   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712221   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712370   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712442   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712643   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712710   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712912   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712987   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.713213   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.713256   57186 kubeadm.go:310] 
	I0827 23:18:28.713308   57186 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0827 23:18:28.713364   57186 kubeadm.go:310] 		timed out waiting for the condition
	I0827 23:18:28.713377   57186 kubeadm.go:310] 
	I0827 23:18:28.713433   57186 kubeadm.go:310] 	This error is likely caused by:
	I0827 23:18:28.713494   57186 kubeadm.go:310] 		- The kubelet is not running
	I0827 23:18:28.713606   57186 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0827 23:18:28.713616   57186 kubeadm.go:310] 
	I0827 23:18:28.713724   57186 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0827 23:18:28.713768   57186 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0827 23:18:28.713821   57186 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0827 23:18:28.713832   57186 kubeadm.go:310] 
	I0827 23:18:28.714013   57186 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0827 23:18:28.714104   57186 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0827 23:18:28.714126   57186 kubeadm.go:310] 
	I0827 23:18:28.714244   57186 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0827 23:18:28.714343   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0827 23:18:28.714430   57186 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0827 23:18:28.714511   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0827 23:18:28.714535   57186 kubeadm.go:310] 
	W0827 23:18:28.714652   57186 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
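
	The failure text above is kubeadm's own guidance: check whether the kubelet is actually running and whether any control-plane container ever started. A rough sketch of running those same checks from the test host, assuming the profile from this log is still up; the journalctl and crictl invocations mirror the ones this log itself uses, while wrapping them in minikube ssh is an assumption for convenience, not something this run did:

	minikube ssh -p kubernetes-upgrade-772694 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p kubernetes-upgrade-772694 "sudo journalctl -u kubelet -n 400 --no-pager"
	minikube ssh -p kubernetes-upgrade-772694 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"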
	
	I0827 23:18:28.714696   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0827 23:18:29.898172   57186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.183448402s)
	I0827 23:18:29.898262   57186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:18:29.912178   57186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:18:29.921912   57186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:18:29.921942   57186 kubeadm.go:157] found existing configuration files:
	
	I0827 23:18:29.921995   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:18:29.931061   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:18:29.931129   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:18:29.940344   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:18:29.949652   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:18:29.949722   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:18:29.960201   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.969231   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:18:29.969282   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.978533   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:18:29.988747   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:18:29.988810   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:18:29.999648   57186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 23:18:30.063431   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:30.063498   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:30.207367   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:30.207495   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:30.207594   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:30.407332   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:30.409199   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:30.409316   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:30.409433   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:30.409544   57186 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 23:18:30.409633   57186 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 23:18:30.409829   57186 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 23:18:30.409940   57186 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 23:18:30.410064   57186 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 23:18:30.410165   57186 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 23:18:30.410233   57186 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 23:18:30.410330   57186 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 23:18:30.410404   57186 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 23:18:30.410512   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:30.487640   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:30.697348   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:30.969478   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:31.147504   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:31.166119   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:31.167645   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:31.167719   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:31.310340   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:31.312219   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:31.312363   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:31.317236   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:31.318847   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:31.319995   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:31.322520   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:19:11.325381   57186 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0827 23:19:11.325841   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:19:11.326018   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:19:16.326779   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:19:16.326995   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:19:26.327755   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:19:26.328080   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:19:46.326912   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:19:46.327139   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:20:26.326691   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:20:26.327012   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:20:26.327035   57186 kubeadm.go:310] 
	I0827 23:20:26.327104   57186 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0827 23:20:26.327173   57186 kubeadm.go:310] 		timed out waiting for the condition
	I0827 23:20:26.327189   57186 kubeadm.go:310] 
	I0827 23:20:26.327230   57186 kubeadm.go:310] 	This error is likely caused by:
	I0827 23:20:26.327273   57186 kubeadm.go:310] 		- The kubelet is not running
	I0827 23:20:26.327412   57186 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0827 23:20:26.327423   57186 kubeadm.go:310] 
	I0827 23:20:26.327515   57186 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0827 23:20:26.327561   57186 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0827 23:20:26.327606   57186 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0827 23:20:26.327625   57186 kubeadm.go:310] 
	I0827 23:20:26.327743   57186 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0827 23:20:26.327826   57186 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0827 23:20:26.327834   57186 kubeadm.go:310] 
	I0827 23:20:26.327975   57186 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0827 23:20:26.328125   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0827 23:20:26.328233   57186 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0827 23:20:26.328338   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0827 23:20:26.328366   57186 kubeadm.go:310] 
	I0827 23:20:26.329312   57186 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 23:20:26.329442   57186 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0827 23:20:26.329533   57186 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0827 23:20:26.329611   57186 kubeadm.go:394] duration metric: took 3m56.281207978s to StartCluster
	I0827 23:20:26.329655   57186 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:20:26.329716   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:20:26.380410   57186 cri.go:89] found id: ""
	I0827 23:20:26.380450   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.380458   57186 logs.go:278] No container was found matching "kube-apiserver"
	I0827 23:20:26.380485   57186 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0827 23:20:26.380540   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:20:26.413554   57186 cri.go:89] found id: ""
	I0827 23:20:26.413577   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.413584   57186 logs.go:278] No container was found matching "etcd"
	I0827 23:20:26.413589   57186 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0827 23:20:26.413646   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:20:26.447321   57186 cri.go:89] found id: ""
	I0827 23:20:26.447351   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.447358   57186 logs.go:278] No container was found matching "coredns"
	I0827 23:20:26.447363   57186 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:20:26.447413   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:20:26.483173   57186 cri.go:89] found id: ""
	I0827 23:20:26.483198   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.483205   57186 logs.go:278] No container was found matching "kube-scheduler"
	I0827 23:20:26.483213   57186 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:20:26.483274   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:20:26.515982   57186 cri.go:89] found id: ""
	I0827 23:20:26.516017   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.516030   57186 logs.go:278] No container was found matching "kube-proxy"
	I0827 23:20:26.516040   57186 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:20:26.516109   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:20:26.547067   57186 cri.go:89] found id: ""
	I0827 23:20:26.547098   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.547109   57186 logs.go:278] No container was found matching "kube-controller-manager"
	I0827 23:20:26.547117   57186 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0827 23:20:26.547171   57186 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:20:26.579711   57186 cri.go:89] found id: ""
	I0827 23:20:26.579743   57186 logs.go:276] 0 containers: []
	W0827 23:20:26.579751   57186 logs.go:278] No container was found matching "kindnet"
	I0827 23:20:26.579760   57186 logs.go:123] Gathering logs for CRI-O ...
	I0827 23:20:26.579772   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0827 23:20:26.688892   57186 logs.go:123] Gathering logs for container status ...
	I0827 23:20:26.688930   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:20:26.726942   57186 logs.go:123] Gathering logs for kubelet ...
	I0827 23:20:26.726976   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 23:20:26.780159   57186 logs.go:123] Gathering logs for dmesg ...
	I0827 23:20:26.780192   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:20:26.793276   57186 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:20:26.793302   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0827 23:20:26.910841   57186 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
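
	The describe-nodes attempt above fails with "connection refused" on localhost:8443 because no kube-apiserver ever came up: every crictl listing earlier in this block returned zero containers. A hedged sketch of confirming that directly on the node (profile name and socket path are taken from this log; availability of curl in the guest is an assumption):

	minikube ssh -p kubernetes-upgrade-772694 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name=kube-apiserver"
	minikube ssh -p kubernetes-upgrade-772694 "curl -k https://localhost:8443/healthz || echo apiserver not listening"
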
	W0827 23:20:26.910873   57186 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0827 23:20:26.910922   57186 out.go:270] * 
	W0827 23:20:26.910980   57186 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0827 23:20:26.911005   57186 out.go:270] * 
	W0827 23:20:26.912111   57186 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 23:20:26.915314   57186 out.go:201] 
	W0827 23:20:26.916416   57186 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0827 23:20:26.916451   57186 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0827 23:20:26.916494   57186 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0827 23:20:26.917834   57186 out.go:201] 

                                                
                                                
** /stderr **
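The kubeadm output above repeatedly reports that the kubelet on the v1.20.0 node never became healthy. A minimal troubleshooting sketch, assembled only from the commands the log itself suggests (the profile name is the one used by this test; CONTAINERID is a placeholder to be filled in from the crictl listing):

	# on the node, e.g. via 'out/minikube-linux-amd64 ssh -p kubernetes-upgrade-772694'
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# from the host, retry with the cgroup-driver override the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd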
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-772694
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-772694: (6.302810624s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-772694 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-772694 status --format={{.Host}}: exit status 7 (65.351102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (37.280796284s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-772694 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.189516ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-772694] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-772694
	    minikube start -p kubernetes-upgrade-772694 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7726942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-772694 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
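As the test expects, minikube refuses the in-place downgrade (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) and prints three recovery options. A minimal sketch of the first option, using the commands from the output above (the driver and runtime flags are added here to match the rest of this run; they are not part of the printed suggestion):

	# recreate the profile at the older Kubernetes version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-772694
	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio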
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0827 23:21:21.248579   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.354752896s)
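For reference, the sequence this test exercises can be replayed by hand with the same commands it runs (a sketch using the versions and profile name from this run; the first start is expected to fail on this driver/runtime combination, as shown earlier in the log, and the downgrade attempt is expected to exit with status 106):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-772694
	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 \
	  --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	# downgrade attempt: refused with K8S_DOWNGRADE_UNSUPPORTED
	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	# restart at the current version after the refused downgrade
	out/minikube-linux-amd64 start -p kubernetes-upgrade-772694 --memory=2200 \
	  --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio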
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-27 23:21:24.115379443 +0000 UTC m=+6240.832943255
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-772694 -n kubernetes-upgrade-772694
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-772694 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-772694 logs -n 25: (1.275301611s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | cri-dockerd --version                                 |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status containerd                           |                              |         |         |                     |                     |
	|         | --all --full --no-pager                               |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat containerd                              |                              |         |         |                     |                     |
	|         | --no-pager                                            |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                             | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /lib/systemd/system/containerd.service                |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                             | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/containerd/config.toml                           |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | containerd config dump                                |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status crio --all                           |                              |         |         |                     |                     |
	|         | --full --no-pager                                     |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                 | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo find                            | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                              |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo crio                            | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | config                                                |                              |         |         |                     |                     |
	| delete  | -p cilium-409668                                      | cilium-409668                | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	| start   | -p old-k8s-version-686432                             | old-k8s-version-686432       | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --kvm-network=default                                 |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                              |         |         |                     |                     |
	|         | --keep-context=false                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-649861                             | cert-expiration-649861       | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	| delete  | -p                                                    | disable-driver-mounts-461235 | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	|         | disable-driver-mounts-461235                          |                              |         |         |                     |                     |
	| start   | -p no-preload-492655                                  | no-preload-492655            | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:20 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                          |                              |         |         |                     |                     |
	| delete  | -p pause-677405                                       | pause-677405                 | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	| start   | -p                                                    | default-k8s-diff-port-723476 | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:20 UTC |
	|         | default-k8s-diff-port-723476                          |                              |         |         |                     |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                 |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                          |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-772694                          | kubernetes-upgrade-772694    | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC | 27 Aug 24 23:20 UTC |
	| start   | -p kubernetes-upgrade-772694                          | kubernetes-upgrade-772694    | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC | 27 Aug 24 23:21 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-492655            | no-preload-492655            | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC | 27 Aug 24 23:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p no-preload-492655                                  | no-preload-492655            | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-723476 | default-k8s-diff-port-723476 | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC | 27 Aug 24 23:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p                                                    | default-k8s-diff-port-723476 | jenkins | v1.33.1 | 27 Aug 24 23:20 UTC |                     |
	|         | default-k8s-diff-port-723476                          |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-772694                          | kubernetes-upgrade-772694    | jenkins | v1.33.1 | 27 Aug 24 23:21 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                              |         |         |                     |                     |
	|         | --driver=kvm2                                         |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-772694                          | kubernetes-upgrade-772694    | jenkins | v1.33.1 | 27 Aug 24 23:21 UTC | 27 Aug 24 23:21 UTC |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                     |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:21:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:21:10.797172   63496 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:21:10.797263   63496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:21:10.797271   63496 out.go:358] Setting ErrFile to fd 2...
	I0827 23:21:10.797275   63496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:21:10.797441   63496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:21:10.797985   63496 out.go:352] Setting JSON to false
	I0827 23:21:10.798898   63496 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7418,"bootTime":1724793453,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:21:10.798948   63496 start.go:139] virtualization: kvm guest
	I0827 23:21:10.801150   63496 out.go:177] * [kubernetes-upgrade-772694] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:21:10.802461   63496 notify.go:220] Checking for updates...
	I0827 23:21:10.802503   63496 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:21:10.803776   63496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:21:10.805020   63496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:21:10.806200   63496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:21:10.807302   63496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:21:10.808375   63496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:21:10.809944   63496 config.go:182] Loaded profile config "kubernetes-upgrade-772694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:21:10.810520   63496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:21:10.810571   63496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:21:10.825233   63496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0827 23:21:10.825577   63496 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:21:10.826057   63496 main.go:141] libmachine: Using API Version  1
	I0827 23:21:10.826088   63496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:21:10.826409   63496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:21:10.826593   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:10.826829   63496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:21:10.827113   63496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:21:10.827147   63496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:21:10.841765   63496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0827 23:21:10.842230   63496 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:21:10.842797   63496 main.go:141] libmachine: Using API Version  1
	I0827 23:21:10.842825   63496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:21:10.843111   63496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:21:10.843283   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:10.883786   63496 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 23:21:10.884874   63496 start.go:297] selected driver: kvm2
	I0827 23:21:10.884890   63496 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:21:10.884991   63496 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:21:10.885626   63496 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:21:10.885686   63496 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:21:10.900791   63496 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:21:10.901312   63496 cni.go:84] Creating CNI manager for ""
	I0827 23:21:10.901338   63496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:21:10.901399   63496 start.go:340] cluster config:
	{Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-772694 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:21:10.901547   63496 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:21:10.903826   63496 out.go:177] * Starting "kubernetes-upgrade-772694" primary control-plane node in "kubernetes-upgrade-772694" cluster
	I0827 23:21:10.905497   63496 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 23:21:10.905547   63496 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 23:21:10.905557   63496 cache.go:56] Caching tarball of preloaded images
	I0827 23:21:10.905643   63496 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 23:21:10.905657   63496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 23:21:10.905787   63496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/config.json ...
	I0827 23:21:10.905982   63496 start.go:360] acquireMachinesLock for kubernetes-upgrade-772694: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:21:10.906029   63496 start.go:364] duration metric: took 26.613µs to acquireMachinesLock for "kubernetes-upgrade-772694"
	I0827 23:21:10.906053   63496 start.go:96] Skipping create...Using existing machine configuration
	I0827 23:21:10.906060   63496 fix.go:54] fixHost starting: 
	I0827 23:21:10.906318   63496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:21:10.906356   63496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:21:10.922394   63496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0827 23:21:10.922920   63496 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:21:10.923465   63496 main.go:141] libmachine: Using API Version  1
	I0827 23:21:10.923498   63496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:21:10.923853   63496 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:21:10.924049   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:10.924199   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetState
	I0827 23:21:10.925997   63496 fix.go:112] recreateIfNeeded on kubernetes-upgrade-772694: state=Running err=<nil>
	W0827 23:21:10.926019   63496 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 23:21:10.927430   63496 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-772694" VM ...
	I0827 23:21:10.928982   63496 machine.go:93] provisionDockerMachine start ...
	I0827 23:21:10.929011   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:10.929200   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:10.932247   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:10.932809   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:10.932840   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:10.933061   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:10.933238   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:10.933420   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:10.933569   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:10.933775   63496 main.go:141] libmachine: Using SSH client type: native
	I0827 23:21:10.933973   63496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:21:10.933986   63496 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:21:11.044528   63496 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-772694
	
	I0827 23:21:11.044566   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:21:11.044836   63496 buildroot.go:166] provisioning hostname "kubernetes-upgrade-772694"
	I0827 23:21:11.044865   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:21:11.045067   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:11.048053   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.048493   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.048524   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.048689   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:11.048895   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.049077   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.049216   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:11.049409   63496 main.go:141] libmachine: Using SSH client type: native
	I0827 23:21:11.049653   63496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:21:11.049675   63496 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-772694 && echo "kubernetes-upgrade-772694" | sudo tee /etc/hostname
	I0827 23:21:11.173429   63496 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-772694
	
	I0827 23:21:11.173460   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:11.176538   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.176901   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.176923   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.177069   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:11.177263   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.177436   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.177581   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:11.177745   63496 main.go:141] libmachine: Using SSH client type: native
	I0827 23:21:11.177959   63496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:21:11.177984   63496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-772694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-772694/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-772694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:21:11.280838   63496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:21:11.280865   63496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 23:21:11.280887   63496 buildroot.go:174] setting up certificates
	I0827 23:21:11.280897   63496 provision.go:84] configureAuth start
	I0827 23:21:11.280908   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetMachineName
	I0827 23:21:11.281236   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:21:11.283927   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.284286   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.284322   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.284506   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:11.286926   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.287330   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.287358   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.287476   63496 provision.go:143] copyHostCerts
	I0827 23:21:11.287543   63496 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 23:21:11.287559   63496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 23:21:11.287625   63496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 23:21:11.287714   63496 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 23:21:11.287723   63496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 23:21:11.287743   63496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 23:21:11.287832   63496 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 23:21:11.287840   63496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 23:21:11.287866   63496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 23:21:11.287911   63496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-772694 san=[127.0.0.1 192.168.83.89 kubernetes-upgrade-772694 localhost minikube]
	I0827 23:21:11.505421   63496 provision.go:177] copyRemoteCerts
	I0827 23:21:11.505493   63496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:21:11.505521   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:11.508650   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.509057   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.509089   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.509303   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:11.509505   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.509718   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:11.509924   63496 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:21:11.600246   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 23:21:11.630633   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0827 23:21:11.659214   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:21:11.682780   63496 provision.go:87] duration metric: took 401.870343ms to configureAuth
	I0827 23:21:11.682813   63496 buildroot.go:189] setting minikube options for container-runtime
	I0827 23:21:11.682993   63496 config.go:182] Loaded profile config "kubernetes-upgrade-772694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:21:11.683068   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:11.686299   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.686860   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:11.686891   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:11.687073   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:11.687295   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.687500   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:11.687667   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:11.687902   63496 main.go:141] libmachine: Using SSH client type: native
	I0827 23:21:11.688126   63496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:21:11.688152   63496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 23:21:12.485385   63496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 23:21:12.485411   63496 machine.go:96] duration metric: took 1.556406944s to provisionDockerMachine
	I0827 23:21:12.485424   63496 start.go:293] postStartSetup for "kubernetes-upgrade-772694" (driver="kvm2")
	I0827 23:21:12.485461   63496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:21:12.485479   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:12.485818   63496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:21:12.485852   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:12.488996   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.489351   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:12.489376   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.489565   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:12.489766   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:12.489924   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:12.490086   63496 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:21:12.570564   63496 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:21:12.574312   63496 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 23:21:12.574336   63496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 23:21:12.574392   63496 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 23:21:12.574460   63496 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 23:21:12.574540   63496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:21:12.582826   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:21:12.604529   63496 start.go:296] duration metric: took 119.091506ms for postStartSetup
	I0827 23:21:12.604566   63496 fix.go:56] duration metric: took 1.698505559s for fixHost
	I0827 23:21:12.604590   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:12.607141   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.607485   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:12.607513   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.607637   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:12.607808   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:12.607945   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:12.608081   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:12.608228   63496 main.go:141] libmachine: Using SSH client type: native
	I0827 23:21:12.608434   63496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.89 22 <nil> <nil>}
	I0827 23:21:12.608447   63496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 23:21:12.712851   63496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724800872.702922762
	
	I0827 23:21:12.712873   63496 fix.go:216] guest clock: 1724800872.702922762
	I0827 23:21:12.712882   63496 fix.go:229] Guest: 2024-08-27 23:21:12.702922762 +0000 UTC Remote: 2024-08-27 23:21:12.604570862 +0000 UTC m=+1.841031303 (delta=98.3519ms)
	I0827 23:21:12.712903   63496 fix.go:200] guest clock delta is within tolerance: 98.3519ms
	I0827 23:21:12.712925   63496 start.go:83] releasing machines lock for "kubernetes-upgrade-772694", held for 1.806868746s
	I0827 23:21:12.712952   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:12.713215   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:21:12.715754   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.716065   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:12.716086   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.716263   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:12.716717   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:12.716885   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .DriverName
	I0827 23:21:12.716983   63496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:21:12.717030   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:12.717141   63496 ssh_runner.go:195] Run: cat /version.json
	I0827 23:21:12.717162   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHHostname
	I0827 23:21:12.719587   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.719764   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.720005   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:12.720031   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.720101   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:12.720125   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:12.720139   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:12.720324   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHPort
	I0827 23:21:12.720346   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:12.720556   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHKeyPath
	I0827 23:21:12.720579   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:12.720714   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetSSHUsername
	I0827 23:21:12.720843   63496 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:21:12.720843   63496 sshutil.go:53] new ssh client: &{IP:192.168.83.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/kubernetes-upgrade-772694/id_rsa Username:docker}
	I0827 23:21:12.830944   63496 ssh_runner.go:195] Run: systemctl --version
	I0827 23:21:12.840510   63496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 23:21:12.990399   63496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 23:21:13.021275   63496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 23:21:13.021344   63496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:21:13.077137   63496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 23:21:13.077164   63496 start.go:495] detecting cgroup driver to use...
	I0827 23:21:13.077238   63496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 23:21:13.103975   63496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 23:21:13.144930   63496 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:21:13.145005   63496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:21:13.190240   63496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:21:13.249022   63496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:21:13.501103   63496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:21:13.707334   63496 docker.go:233] disabling docker service ...
	I0827 23:21:13.707446   63496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:21:13.732428   63496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:21:13.754539   63496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:21:13.931705   63496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:21:14.114665   63496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:21:14.134569   63496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:21:14.159005   63496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 23:21:14.159119   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.173015   63496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 23:21:14.173086   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.187295   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.198958   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.211274   63496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:21:14.222698   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.232846   63496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.243146   63496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:21:14.254399   63496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:21:14.269291   63496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:21:14.282899   63496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:21:14.449395   63496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 23:21:15.326761   63496 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 23:21:15.326838   63496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 23:21:15.331782   63496 start.go:563] Will wait 60s for crictl version
	I0827 23:21:15.331839   63496 ssh_runner.go:195] Run: which crictl
	I0827 23:21:15.335825   63496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:21:15.377317   63496 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 23:21:15.377406   63496 ssh_runner.go:195] Run: crio --version
	I0827 23:21:15.408449   63496 ssh_runner.go:195] Run: crio --version
	I0827 23:21:15.436453   63496 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 23:21:15.437587   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) Calling .GetIP
	I0827 23:21:15.440322   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:15.440655   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:42:dc", ip: ""} in network mk-kubernetes-upgrade-772694: {Iface:virbr4 ExpiryTime:2024-08-28 00:20:44 +0000 UTC Type:0 Mac:52:54:00:0b:42:dc Iaid: IPaddr:192.168.83.89 Prefix:24 Hostname:kubernetes-upgrade-772694 Clientid:01:52:54:00:0b:42:dc}
	I0827 23:21:15.440680   63496 main.go:141] libmachine: (kubernetes-upgrade-772694) DBG | domain kubernetes-upgrade-772694 has defined IP address 192.168.83.89 and MAC address 52:54:00:0b:42:dc in network mk-kubernetes-upgrade-772694
	I0827 23:21:15.440885   63496 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0827 23:21:15.444908   63496 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:21:15.445007   63496 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 23:21:15.445053   63496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:21:15.483778   63496 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 23:21:15.483799   63496 crio.go:433] Images already preloaded, skipping extraction
	I0827 23:21:15.483851   63496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:21:15.517896   63496 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 23:21:15.517919   63496 cache_images.go:84] Images are preloaded, skipping loading
	I0827 23:21:15.517927   63496 kubeadm.go:934] updating node { 192.168.83.89 8443 v1.31.0 crio true true} ...
	I0827 23:21:15.518047   63496 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-772694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:21:15.518127   63496 ssh_runner.go:195] Run: crio config
	I0827 23:21:15.565720   63496 cni.go:84] Creating CNI manager for ""
	I0827 23:21:15.565742   63496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:21:15.565754   63496 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:21:15.565784   63496 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.89 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-772694 NodeName:kubernetes-upgrade-772694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 23:21:15.565946   63496 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-772694"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:21:15.566027   63496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 23:21:15.575894   63496 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:21:15.575949   63496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:21:15.584638   63496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0827 23:21:15.600202   63496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:21:15.614801   63496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0827 23:21:15.629447   63496 ssh_runner.go:195] Run: grep 192.168.83.89	control-plane.minikube.internal$ /etc/hosts
	I0827 23:21:15.633042   63496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:21:15.763131   63496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:21:15.776937   63496 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694 for IP: 192.168.83.89
	I0827 23:21:15.776961   63496 certs.go:194] generating shared ca certs ...
	I0827 23:21:15.776977   63496 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:21:15.777134   63496 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 23:21:15.777173   63496 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 23:21:15.777181   63496 certs.go:256] generating profile certs ...
	I0827 23:21:15.777256   63496 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/client.key
	I0827 23:21:15.777301   63496 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key.9a38dc4c
	I0827 23:21:15.777348   63496 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key
	I0827 23:21:15.777459   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 23:21:15.777487   63496 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 23:21:15.777504   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:21:15.777528   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 23:21:15.777551   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:21:15.777575   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 23:21:15.777621   63496 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:21:15.778218   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:21:15.802164   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 23:21:15.824249   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:21:15.846629   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:21:15.869669   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0827 23:21:15.892583   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 23:21:15.914110   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:21:15.935873   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/kubernetes-upgrade-772694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:21:15.957603   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 23:21:15.979085   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:21:16.043723   63496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 23:21:16.166235   63496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:21:16.198855   63496 ssh_runner.go:195] Run: openssl version
	I0827 23:21:16.222767   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 23:21:16.235071   63496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 23:21:16.239850   63496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 23:21:16.239899   63496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 23:21:16.245487   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:21:16.260000   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:21:16.288342   63496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:21:16.297609   63496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:21:16.297666   63496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:21:16.306187   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:21:16.317276   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 23:21:16.330101   63496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 23:21:16.334508   63496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 23:21:16.334564   63496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 23:21:16.339897   63496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 23:21:16.349627   63496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:21:16.353783   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 23:21:16.359097   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 23:21:16.364302   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 23:21:16.369873   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 23:21:16.375205   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 23:21:16.380188   63496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 23:21:16.385359   63496 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-772694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-772694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.89 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:21:16.385449   63496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 23:21:16.385511   63496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:21:16.420425   63496 cri.go:89] found id: "8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f"
	I0827 23:21:16.420453   63496 cri.go:89] found id: "2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f"
	I0827 23:21:16.420459   63496 cri.go:89] found id: "fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779"
	I0827 23:21:16.420482   63496 cri.go:89] found id: "7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4"
	I0827 23:21:16.420487   63496 cri.go:89] found id: ""
	I0827 23:21:16.420538   63496 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.783078720Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:462c551630bde02e36cf62478621633ddbc90aedb0b4533c21a6bcc0c5a240ed,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nrnb6,Uid:736fad6a-7b14-4bb1-82db-f5a468328b50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724800881958819237,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nrnb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 736fad6a-7b14-4bb1-82db-f5a468328b50,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-27T23:21:21.623273485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8c222ca8bf40813f1fd95b72c7fec79e04df7e0de57062de1b34cc998542735d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8hjbg,Uid:8fa1911b-fb7f-4cde-a87c-b8db705e8f07,Namespac
e:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724800881956137252,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8hjbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fa1911b-fb7f-4cde-a87c-b8db705e8f07,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-27T23:21:21.623265610Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b0d54fbbdb3a739cc861d27f24dc728778b2346f86b4ddb8bad72f2024608d9,Metadata:&PodSandboxMetadata{Name:kube-proxy-k7hcj,Uid:13bb9359-3abb-4f60-ac48-4dfebb11bbe2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724800881936954107,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k7hcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13bb9359-3abb-4f60-ac48-4dfebb11bbe2,k8s-app: kube-proxy,pod-template-generation: 1,},Annot
ations:map[string]string{kubernetes.io/config.seen: 2024-08-27T23:21:21.623277970Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4327749f22e2b8d9910051ad91a2307ebe3a899d4bd7c63b156cdb1bf2084c81,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:288512ee-ecfa-425f-bf91-5535b5854a43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724800881935095055,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288512ee-ecfa-425f-bf91-5535b5854a43,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"conta
iners\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-27T23:21:21.623281153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e878c58f47622192b13fb8a95fe7b46288132760dd320d0f0381ef1275b0492e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-772694,Uid:76892a907cddd4905c980e850e7f2a3c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724800876053708147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e8
50e7f2a3c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76892a907cddd4905c980e850e7f2a3c,kubernetes.io/config.seen: 2024-08-27T23:21:00.745486191Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cc6ccb893b8d4ee7450be872e84fab7103747c4cd312c553b20764a851648d7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-772694,Uid:f2251791a00b5a94fcf4cfd9bdaad892,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724800876048542779,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2251791a00b5a94fcf4cfd9bdaad892,kubernetes.io/config.seen: 2024-08-27T23:21:00.745481567Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7
cae0dcc707f1d6465b7df4ec621a05222c1a955a2b21e12570a13f42467ea88,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-772694,Uid:606a40ac6da8a585c27f10c07a4c7004,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724800876038969847,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.89:8443,kubernetes.io/config.hash: 606a40ac6da8a585c27f10c07a4c7004,kubernetes.io/config.seen: 2024-08-27T23:21:00.745487911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4fb0c8cda4845657a009008b9d58b68699517984f50f094b345eb052c0d2c693,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-772694,Uid:b9094a91a5905084912764971d916bb6,Namespace:kube-system,Attem
pt:2,},State:SANDBOX_READY,CreatedAt:1724800876032680862,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.89:2379,kubernetes.io/config.hash: b9094a91a5905084912764971d916bb6,kubernetes.io/config.seen: 2024-08-27T23:21:00.780206821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd8ca8baac435bcef5d498891a901500b6fcb83bacdcf4cdb620043ff9e81cb6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-772694,Uid:606a40ac6da8a585c27f10c07a4c7004,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724800872990911205,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.89:8443,kubernetes.io/config.hash: 606a40ac6da8a585c27f10c07a4c7004,kubernetes.io/config.seen: 2024-08-27T23:21:00.745487911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d1384f877cdb27084f1c6bb477d47bb29ced5a5896a784d375c2d6083bf0e22,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-772694,Uid:76892a907cddd4905c980e850e7f2a3c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724800872989505767,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 76892a907cddd4905c980e85
0e7f2a3c,kubernetes.io/config.seen: 2024-08-27T23:21:00.745486191Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86e9c0cec787724644e0b29ffd3c5d81fc2ed038165ad701a11dfcb9abc07ba0,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-772694,Uid:b9094a91a5905084912764971d916bb6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724800872988511548,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.89:2379,kubernetes.io/config.hash: b9094a91a5905084912764971d916bb6,kubernetes.io/config.seen: 2024-08-27T23:21:00.780206821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9143e27d890014872d070c026f2456d1e15e612b8729712c0addb48f7307d8e,Metadata:&PodSandboxMetada
ta{Name:kube-controller-manager-kubernetes-upgrade-772694,Uid:f2251791a00b5a94fcf4cfd9bdaad892,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724800872977931404,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2251791a00b5a94fcf4cfd9bdaad892,kubernetes.io/config.seen: 2024-08-27T23:21:00.745481567Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=42955f74-94a1-4bd8-9c06-d07310cdc9e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.783875892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c64dcdd-9c86-42b1-b111-31533ad34d5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.783944112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c64dcdd-9c86-42b1-b111-31533ad34d5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.784228404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504eef9582bc6ffeaef77608dde2553140a704d30a1dec3dec51c45752853f62,PodSandboxId:8c222ca8bf40813f1fd95b72c7fec79e04df7e0de57062de1b34cc998542735d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882588007275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8hjbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fa1911b-fb7f-4cde-a87c-b8db705e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01933169784b99a707dd135ece51a3204ad522b16c81773d984ca887c66247a8,PodSandboxId:462c551630bde02e36cf62478621633ddbc90aedb0b4533c21a6bcc0c5a240ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882463158685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nrnb6,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 736fad6a-7b14-4bb1-82db-f5a468328b50,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f90738f4602ab867017271321278b40313aa8a965720213fa99a4d4c53e835,PodSandboxId:4327749f22e2b8d9910051ad91a2307ebe3a899d4bd7c63b156cdb1bf2084c81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1724800882251571342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288512ee-ecfa-425f-bf91-5535b5854a43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e66101ea208c709c09138ef09bbbcea3499d02ffdc8143fbcee43b26324e2b,PodSandboxId:5b0d54fbbdb3a739cc861d27f24dc728778b2346f86b4ddb8bad72f2024608d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1724800882228544033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7hcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13bb9359-3abb-4f60-ac48-4dfebb11bbe2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f117176adbf53460e167816ba5932c48f9d8494cd0c2ca5082a9291e142c06,PodSandboxId:4fb0c8cda4845657a009008b9d58b68699517984f50f094b345eb052c0d2c693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800878095455502,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe20871a486b7aa046f6bc005a14c3b5eadb133da37926325bc9e03dbe5101e,PodSandboxId:4cc6ccb893b8d4ee7450be872e84fab7103747c4cd312c553b20764a851648d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800878080725488,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ebca36987c2008541ee1fc4b7318b63a3ffad8579ab24449aaed131f2873f2,PodSandboxId:7cae0dcc707f1d6465b7df4ec621a05222c1a955a2b21e12570a13f42467ea88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800878093117188,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23ed408e2e3bbe04d16181ee359ca8044dbe90c6aafcffa9416f90c6f3a37d9e,PodSandboxId:e878c58f47622192b13fb8a95fe7b46288132760dd320d0f0381ef1275b0492e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800878073813382,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f,PodSandboxId:dd8ca8baac435bcef5d498891a901500b6fcb83bacdcf4cdb620043ff9e81cb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800873233925623,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779,PodSandboxId:86e9c0cec787724644e0b29ffd3c5d81fc2ed038165ad701a11dfcb9abc07ba0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800873213540311,Labels:map[string]string{io.kubernetes.co
ntainer.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f,PodSandboxId:6d1384f877cdb27084f1c6bb477d47bb29ced5a5896a784d375c2d6083bf0e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800873261300721,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4,PodSandboxId:a9143e27d890014872d070c026f2456d1e15e612b8729712c0addb48f7307d8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800873191968833,Labels:map[string]string{io.kubernetes.container.name: kube-cont
roller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c64dcdd-9c86-42b1-b111-31533ad34d5a name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.800028551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65a30490-ad0b-4e34-857c-34f07dcb8a62 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.800126926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65a30490-ad0b-4e34-857c-34f07dcb8a62 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.802053540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54b598c3-ca65-479f-aa6d-54f04d12bbce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.802514143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800884802484973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54b598c3-ca65-479f-aa6d-54f04d12bbce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.802959634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b7ed15e-d65c-487a-ae84-2ff72921319f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.803032753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b7ed15e-d65c-487a-ae84-2ff72921319f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.803368773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504eef9582bc6ffeaef77608dde2553140a704d30a1dec3dec51c45752853f62,PodSandboxId:8c222ca8bf40813f1fd95b72c7fec79e04df7e0de57062de1b34cc998542735d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882588007275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8hjbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fa1911b-fb7f-4cde-a87c-b8db705e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01933169784b99a707dd135ece51a3204ad522b16c81773d984ca887c66247a8,PodSandboxId:462c551630bde02e36cf62478621633ddbc90aedb0b4533c21a6bcc0c5a240ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882463158685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nrnb6,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 736fad6a-7b14-4bb1-82db-f5a468328b50,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f90738f4602ab867017271321278b40313aa8a965720213fa99a4d4c53e835,PodSandboxId:4327749f22e2b8d9910051ad91a2307ebe3a899d4bd7c63b156cdb1bf2084c81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1724800882251571342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288512ee-ecfa-425f-bf91-5535b5854a43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e66101ea208c709c09138ef09bbbcea3499d02ffdc8143fbcee43b26324e2b,PodSandboxId:5b0d54fbbdb3a739cc861d27f24dc728778b2346f86b4ddb8bad72f2024608d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1724800882228544033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7hcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13bb9359-3abb-4f60-ac48-4dfebb11bbe2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f117176adbf53460e167816ba5932c48f9d8494cd0c2ca5082a9291e142c06,PodSandboxId:4fb0c8cda4845657a009008b9d58b68699517984f50f094b345eb052c0d2c693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800878095455502,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe20871a486b7aa046f6bc005a14c3b5eadb133da37926325bc9e03dbe5101e,PodSandboxId:4cc6ccb893b8d4ee7450be872e84fab7103747c4cd312c553b20764a851648d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800878080725488,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ebca36987c2008541ee1fc4b7318b63a3ffad8579ab24449aaed131f2873f2,PodSandboxId:7cae0dcc707f1d6465b7df4ec621a05222c1a955a2b21e12570a13f42467ea88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800878093117188,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23ed408e2e3bbe04d16181ee359ca8044dbe90c6aafcffa9416f90c6f3a37d9e,PodSandboxId:e878c58f47622192b13fb8a95fe7b46288132760dd320d0f0381ef1275b0492e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800878073813382,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f,PodSandboxId:dd8ca8baac435bcef5d498891a901500b6fcb83bacdcf4cdb620043ff9e81cb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800873233925623,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779,PodSandboxId:86e9c0cec787724644e0b29ffd3c5d81fc2ed038165ad701a11dfcb9abc07ba0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800873213540311,Labels:map[string]string{io.kubernetes.co
ntainer.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f,PodSandboxId:6d1384f877cdb27084f1c6bb477d47bb29ced5a5896a784d375c2d6083bf0e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800873261300721,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4,PodSandboxId:a9143e27d890014872d070c026f2456d1e15e612b8729712c0addb48f7307d8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800873191968833,Labels:map[string]string{io.kubernetes.container.name: kube-cont
roller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b7ed15e-d65c-487a-ae84-2ff72921319f name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.848644358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c7af27c-b9c6-484a-bf2f-1d9d8cc3e044 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.848727920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c7af27c-b9c6-484a-bf2f-1d9d8cc3e044 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.850049258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae8d493d-5647-4925-a84b-84eab522dc37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.850515566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800884850478869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae8d493d-5647-4925-a84b-84eab522dc37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.851138132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ffe595d-13e1-4ba6-8199-42b2f7ea8753 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.851205537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ffe595d-13e1-4ba6-8199-42b2f7ea8753 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.851542303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504eef9582bc6ffeaef77608dde2553140a704d30a1dec3dec51c45752853f62,PodSandboxId:8c222ca8bf40813f1fd95b72c7fec79e04df7e0de57062de1b34cc998542735d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882588007275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8hjbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fa1911b-fb7f-4cde-a87c-b8db705e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01933169784b99a707dd135ece51a3204ad522b16c81773d984ca887c66247a8,PodSandboxId:462c551630bde02e36cf62478621633ddbc90aedb0b4533c21a6bcc0c5a240ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882463158685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nrnb6,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 736fad6a-7b14-4bb1-82db-f5a468328b50,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f90738f4602ab867017271321278b40313aa8a965720213fa99a4d4c53e835,PodSandboxId:4327749f22e2b8d9910051ad91a2307ebe3a899d4bd7c63b156cdb1bf2084c81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1724800882251571342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288512ee-ecfa-425f-bf91-5535b5854a43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e66101ea208c709c09138ef09bbbcea3499d02ffdc8143fbcee43b26324e2b,PodSandboxId:5b0d54fbbdb3a739cc861d27f24dc728778b2346f86b4ddb8bad72f2024608d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1724800882228544033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7hcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13bb9359-3abb-4f60-ac48-4dfebb11bbe2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f117176adbf53460e167816ba5932c48f9d8494cd0c2ca5082a9291e142c06,PodSandboxId:4fb0c8cda4845657a009008b9d58b68699517984f50f094b345eb052c0d2c693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800878095455502,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe20871a486b7aa046f6bc005a14c3b5eadb133da37926325bc9e03dbe5101e,PodSandboxId:4cc6ccb893b8d4ee7450be872e84fab7103747c4cd312c553b20764a851648d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800878080725488,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ebca36987c2008541ee1fc4b7318b63a3ffad8579ab24449aaed131f2873f2,PodSandboxId:7cae0dcc707f1d6465b7df4ec621a05222c1a955a2b21e12570a13f42467ea88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800878093117188,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23ed408e2e3bbe04d16181ee359ca8044dbe90c6aafcffa9416f90c6f3a37d9e,PodSandboxId:e878c58f47622192b13fb8a95fe7b46288132760dd320d0f0381ef1275b0492e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800878073813382,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f,PodSandboxId:dd8ca8baac435bcef5d498891a901500b6fcb83bacdcf4cdb620043ff9e81cb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800873233925623,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779,PodSandboxId:86e9c0cec787724644e0b29ffd3c5d81fc2ed038165ad701a11dfcb9abc07ba0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800873213540311,Labels:map[string]string{io.kubernetes.co
ntainer.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f,PodSandboxId:6d1384f877cdb27084f1c6bb477d47bb29ced5a5896a784d375c2d6083bf0e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800873261300721,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4,PodSandboxId:a9143e27d890014872d070c026f2456d1e15e612b8729712c0addb48f7307d8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800873191968833,Labels:map[string]string{io.kubernetes.container.name: kube-cont
roller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ffe595d-13e1-4ba6-8199-42b2f7ea8753 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.884863747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d8b8700-a237-428c-9bc4-58287109cb60 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.884948014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d8b8700-a237-428c-9bc4-58287109cb60 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.886801113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17b6cd72-6821-4ede-b4de-d662253c2e69 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.887210908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800884887185689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17b6cd72-6821-4ede-b4de-d662253c2e69 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.887907049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=710d0836-c38c-4081-b46f-f6eb22d8d082 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.887988943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=710d0836-c38c-4081-b46f-f6eb22d8d082 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:21:24 kubernetes-upgrade-772694 crio[1853]: time="2024-08-27 23:21:24.888290669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:504eef9582bc6ffeaef77608dde2553140a704d30a1dec3dec51c45752853f62,PodSandboxId:8c222ca8bf40813f1fd95b72c7fec79e04df7e0de57062de1b34cc998542735d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882588007275,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8hjbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fa1911b-fb7f-4cde-a87c-b8db705e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01933169784b99a707dd135ece51a3204ad522b16c81773d984ca887c66247a8,PodSandboxId:462c551630bde02e36cf62478621633ddbc90aedb0b4533c21a6bcc0c5a240ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800882463158685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nrnb6,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 736fad6a-7b14-4bb1-82db-f5a468328b50,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f90738f4602ab867017271321278b40313aa8a965720213fa99a4d4c53e835,PodSandboxId:4327749f22e2b8d9910051ad91a2307ebe3a899d4bd7c63b156cdb1bf2084c81,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1724800882251571342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 288512ee-ecfa-425f-bf91-5535b5854a43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e66101ea208c709c09138ef09bbbcea3499d02ffdc8143fbcee43b26324e2b,PodSandboxId:5b0d54fbbdb3a739cc861d27f24dc728778b2346f86b4ddb8bad72f2024608d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1724800882228544033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7hcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13bb9359-3abb-4f60-ac48-4dfebb11bbe2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f117176adbf53460e167816ba5932c48f9d8494cd0c2ca5082a9291e142c06,PodSandboxId:4fb0c8cda4845657a009008b9d58b68699517984f50f094b345eb052c0d2c693,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800878095455502,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe20871a486b7aa046f6bc005a14c3b5eadb133da37926325bc9e03dbe5101e,PodSandboxId:4cc6ccb893b8d4ee7450be872e84fab7103747c4cd312c553b20764a851648d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800878080725488,Labels:map[string]string{io
.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ebca36987c2008541ee1fc4b7318b63a3ffad8579ab24449aaed131f2873f2,PodSandboxId:7cae0dcc707f1d6465b7df4ec621a05222c1a955a2b21e12570a13f42467ea88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800878093117188,Labels:map[str
ing]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23ed408e2e3bbe04d16181ee359ca8044dbe90c6aafcffa9416f90c6f3a37d9e,PodSandboxId:e878c58f47622192b13fb8a95fe7b46288132760dd320d0f0381ef1275b0492e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800878073813382,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f,PodSandboxId:dd8ca8baac435bcef5d498891a901500b6fcb83bacdcf4cdb620043ff9e81cb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800873233925623,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 606a40ac6da8a585c27f10c07a4c7004,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779,PodSandboxId:86e9c0cec787724644e0b29ffd3c5d81fc2ed038165ad701a11dfcb9abc07ba0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800873213540311,Labels:map[string]string{io.kubernetes.co
ntainer.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9094a91a5905084912764971d916bb6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f,PodSandboxId:6d1384f877cdb27084f1c6bb477d47bb29ced5a5896a784d375c2d6083bf0e22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800873261300721,Labels:map[string]string{io.kubernetes.container.name: kube-schedul
er,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76892a907cddd4905c980e850e7f2a3c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4,PodSandboxId:a9143e27d890014872d070c026f2456d1e15e612b8729712c0addb48f7307d8e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800873191968833,Labels:map[string]string{io.kubernetes.container.name: kube-cont
roller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-772694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2251791a00b5a94fcf4cfd9bdaad892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=710d0836-c38c-4081-b46f-f6eb22d8d082 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	504eef9582bc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   0                   8c222ca8bf408       coredns-6f6b679f8f-8hjbg
	01933169784b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   0                   462c551630bde       coredns-6f6b679f8f-nrnb6
	a1f90738f4602       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       0                   4327749f22e2b       storage-provisioner
	19e66101ea208       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   2 seconds ago       Running             kube-proxy                0                   5b0d54fbbdb3a       kube-proxy-k7hcj
	68f117176adbf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   6 seconds ago       Running             etcd                      2                   4fb0c8cda4845       etcd-kubernetes-upgrade-772694
	31ebca36987c2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   6 seconds ago       Running             kube-apiserver            2                   7cae0dcc707f1       kube-apiserver-kubernetes-upgrade-772694
	dbe20871a486b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   6 seconds ago       Running             kube-controller-manager   2                   4cc6ccb893b8d       kube-controller-manager-kubernetes-upgrade-772694
	23ed408e2e3bb       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   6 seconds ago       Running             kube-scheduler            2                   e878c58f47622       kube-scheduler-kubernetes-upgrade-772694
	8828e381ec178       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   11 seconds ago      Exited              kube-scheduler            1                   6d1384f877cdb       kube-scheduler-kubernetes-upgrade-772694
	2b455dbee96d0       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   11 seconds ago      Exited              kube-apiserver            1                   dd8ca8baac435       kube-apiserver-kubernetes-upgrade-772694
	fb6498d4ede60       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   11 seconds ago      Exited              etcd                      1                   86e9c0cec7877       etcd-kubernetes-upgrade-772694
	7a0fb7f251a5f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   11 seconds ago      Exited              kube-controller-manager   1                   a9143e27d8900       kube-controller-manager-kubernetes-upgrade-772694
	
	
	==> coredns [01933169784b99a707dd135ece51a3204ad522b16c81773d984ca887c66247a8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> coredns [504eef9582bc6ffeaef77608dde2553140a704d30a1dec3dec51c45752853f62] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-772694
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-772694
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-772694
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:21:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:21:21 +0000   Tue, 27 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:21:21 +0000   Tue, 27 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:21:21 +0000   Tue, 27 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:21:21 +0000   Tue, 27 Aug 2024 23:21:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.89
	  Hostname:    kubernetes-upgrade-772694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d98a51d9223f4fc5b8b6cbb0972df7cf
	  System UUID:                d98a51d9-223f-4fc5-b8b6-cbb0972df7cf
	  Boot ID:                    f03b7284-6e3a-4d2d-843f-09516b0608b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-8hjbg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14s
	  kube-system                 coredns-6f6b679f8f-nrnb6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14s
	  kube-system                 etcd-kubernetes-upgrade-772694              100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15s
	  kube-system                 kube-apiserver-kubernetes-upgrade-772694    250m (12%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-proxy-k7hcj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-kubernetes-upgrade-772694    100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                650m (32%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node kubernetes-upgrade-772694 event: Registered Node kubernetes-upgrade-772694 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-772694 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-772694 event: Registered Node kubernetes-upgrade-772694 in Controller
	
	
	==> dmesg <==
	[  +1.852372] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.522600] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.964351] systemd-fstab-generator[551]: Ignoring "noauto" option for root device
	[  +0.061877] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051180] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.187713] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.117548] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.267603] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +3.855374] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +1.865338] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.071125] kauditd_printk_skb: 158 callbacks suppressed
	[Aug27 23:21] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[  +0.086656] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.334569] systemd-fstab-generator[1678]: Ignoring "noauto" option for root device
	[  +0.243453] systemd-fstab-generator[1769]: Ignoring "noauto" option for root device
	[  +0.222670] systemd-fstab-generator[1801]: Ignoring "noauto" option for root device
	[  +0.188973] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.331706] systemd-fstab-generator[1844]: Ignoring "noauto" option for root device
	[  +1.346762] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.071342] kauditd_printk_skb: 201 callbacks suppressed
	[  +1.687006] systemd-fstab-generator[2294]: Ignoring "noauto" option for root device
	[  +4.543545] kauditd_printk_skb: 82 callbacks suppressed
	[  +1.022852] systemd-fstab-generator[3022]: Ignoring "noauto" option for root device
	
	
	==> etcd [68f117176adbf53460e167816ba5932c48f9d8494cd0c2ca5082a9291e142c06] <==
	{"level":"info","ts":"2024-08-27T23:21:18.372850Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:21:18.372992Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"27f2d51692cb99ff","local-member-id":"e7e0b3eb3a838948","added-peer-id":"e7e0b3eb3a838948","added-peer-peer-urls":["https://192.168.83.89:2380"]}
	{"level":"info","ts":"2024-08-27T23:21:18.373139Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"27f2d51692cb99ff","local-member-id":"e7e0b3eb3a838948","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:21:18.373176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:21:18.375272Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T23:21:18.377020Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e7e0b3eb3a838948","initial-advertise-peer-urls":["https://192.168.83.89:2380"],"listen-peer-urls":["https://192.168.83.89:2380"],"advertise-client-urls":["https://192.168.83.89:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.89:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T23:21:18.377116Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:21:18.377219Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.83.89:2380"}
	{"level":"info","ts":"2024-08-27T23:21:18.377289Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.83.89:2380"}
	{"level":"info","ts":"2024-08-27T23:21:19.442437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-27T23:21:19.442542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T23:21:19.442597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 received MsgPreVoteResp from e7e0b3eb3a838948 at term 2"}
	{"level":"info","ts":"2024-08-27T23:21:19.442634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:21:19.442658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 received MsgVoteResp from e7e0b3eb3a838948 at term 3"}
	{"level":"info","ts":"2024-08-27T23:21:19.442685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T23:21:19.442710Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7e0b3eb3a838948 elected leader e7e0b3eb3a838948 at term 3"}
	{"level":"info","ts":"2024-08-27T23:21:19.450592Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e7e0b3eb3a838948","local-member-attributes":"{Name:kubernetes-upgrade-772694 ClientURLs:[https://192.168.83.89:2379]}","request-path":"/0/members/e7e0b3eb3a838948/attributes","cluster-id":"27f2d51692cb99ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:21:19.450788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:21:19.451072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:21:19.451762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:21:19.452538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:21:19.453169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:21:19.453439Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:21:19.453479Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T23:21:19.455928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.89:2379"}
	
	
	==> etcd [fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779] <==
	{"level":"info","ts":"2024-08-27T23:21:13.887888Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-27T23:21:13.898591Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"27f2d51692cb99ff","local-member-id":"e7e0b3eb3a838948","commit-index":348}
	{"level":"info","ts":"2024-08-27T23:21:13.898881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-27T23:21:13.898933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 became follower at term 2"}
	{"level":"info","ts":"2024-08-27T23:21:13.898962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e7e0b3eb3a838948 [peers: [], term: 2, commit: 348, applied: 0, lastindex: 348, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-27T23:21:13.902455Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-27T23:21:13.934936Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":342}
	{"level":"info","ts":"2024-08-27T23:21:13.945355Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-27T23:21:13.978105Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e7e0b3eb3a838948","timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:21:13.978367Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e7e0b3eb3a838948"}
	{"level":"info","ts":"2024-08-27T23:21:13.978459Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"e7e0b3eb3a838948","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-27T23:21:13.978967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:21:13.996624Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-27T23:21:13.999127Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T23:21:14.002629Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e7e0b3eb3a838948","initial-advertise-peer-urls":["https://192.168.83.89:2380"],"listen-peer-urls":["https://192.168.83.89:2380"],"advertise-client-urls":["https://192.168.83.89:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.89:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T23:21:14.002695Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:21:13.999180Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.83.89:2380"}
	{"level":"info","ts":"2024-08-27T23:21:14.002780Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.83.89:2380"}
	{"level":"info","ts":"2024-08-27T23:21:13.999663Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-27T23:21:14.002828Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-27T23:21:14.002855Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-27T23:21:13.999901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7e0b3eb3a838948 switched to configuration voters=(16708552440424925512)"}
	{"level":"info","ts":"2024-08-27T23:21:14.005561Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"27f2d51692cb99ff","local-member-id":"e7e0b3eb3a838948","added-peer-id":"e7e0b3eb3a838948","added-peer-peer-urls":["https://192.168.83.89:2380"]}
	{"level":"info","ts":"2024-08-27T23:21:14.005666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"27f2d51692cb99ff","local-member-id":"e7e0b3eb3a838948","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:21:14.005710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:21:25 up 0 min,  0 users,  load average: 1.50, 0.37, 0.12
	Linux kubernetes-upgrade-772694 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f] <==
	I0827 23:21:13.643757       1 options.go:228] external host was not specified, using 192.168.83.89
	I0827 23:21:13.645723       1 server.go:142] Version: v1.31.0
	I0827 23:21:13.645795       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [31ebca36987c2008541ee1fc4b7318b63a3ffad8579ab24449aaed131f2873f2] <==
	I0827 23:21:20.860279       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 23:21:20.861812       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 23:21:20.864456       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 23:21:20.864516       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 23:21:20.864606       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 23:21:20.869505       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0827 23:21:20.861864       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 23:21:20.870512       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 23:21:20.870611       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 23:21:20.874450       1 aggregator.go:171] initial CRD sync complete...
	I0827 23:21:20.874473       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 23:21:20.874478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 23:21:20.874483       1 cache.go:39] Caches are synced for autoregister controller
	I0827 23:21:20.906044       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 23:21:20.917302       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 23:21:20.917426       1 policy_source.go:224] refreshing policies
	I0827 23:21:20.959231       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 23:21:21.763104       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0827 23:21:22.662549       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 23:21:22.680518       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 23:21:22.746622       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 23:21:22.844177       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 23:21:22.866921       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0827 23:21:23.914544       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 23:21:24.518161       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4] <==
	
	
	==> kube-controller-manager [dbe20871a486b7aa046f6bc005a14c3b5eadb133da37926325bc9e03dbe5101e] <==
	I0827 23:21:24.167450       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0827 23:21:24.167483       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0827 23:21:24.167557       1 shared_informer.go:320] Caches are synced for HPA
	I0827 23:21:24.172504       1 shared_informer.go:320] Caches are synced for cronjob
	I0827 23:21:24.176764       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0827 23:21:24.176837       1 shared_informer.go:320] Caches are synced for node
	I0827 23:21:24.176901       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0827 23:21:24.176941       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0827 23:21:24.176963       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0827 23:21:24.177026       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0827 23:21:24.177099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-772694"
	I0827 23:21:24.182364       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0827 23:21:24.182590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="106.678µs"
	I0827 23:21:24.195828       1 shared_informer.go:320] Caches are synced for namespace
	I0827 23:21:24.209250       1 shared_informer.go:320] Caches are synced for service account
	I0827 23:21:24.263742       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0827 23:21:24.274766       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:21:24.278935       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0827 23:21:24.279021       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-772694"
	I0827 23:21:24.317789       1 shared_informer.go:320] Caches are synced for endpoint
	I0827 23:21:24.350566       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:21:24.375011       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 23:21:24.812695       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:21:24.812729       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 23:21:24.814205       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [19e66101ea208c709c09138ef09bbbcea3499d02ffdc8143fbcee43b26324e2b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 23:21:22.714178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 23:21:22.768862       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.83.89"]
	E0827 23:21:22.768947       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:21:23.149362       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 23:21:23.149440       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 23:21:23.149469       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:21:23.152280       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:21:23.152635       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:21:23.152647       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:21:23.155886       1 config.go:197] "Starting service config controller"
	I0827 23:21:23.155930       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:21:23.155950       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:21:23.155954       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:21:23.161405       1 config.go:326] "Starting node config controller"
	I0827 23:21:23.161430       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:21:23.256497       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 23:21:23.256625       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:21:23.261500       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23ed408e2e3bbe04d16181ee359ca8044dbe90c6aafcffa9416f90c6f3a37d9e] <==
	I0827 23:21:19.085097       1 serving.go:386] Generated self-signed cert in-memory
	I0827 23:21:20.885883       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:21:20.885912       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:21:20.889621       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0827 23:21:20.889717       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 23:21:20.889763       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0827 23:21:20.889846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:21:20.889873       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:21:20.889939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0827 23:21:20.889962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0827 23:21:20.889805       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 23:21:20.989942       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0827 23:21:20.989997       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:21:20.990070       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f] <==
	I0827 23:21:14.777904       1 serving.go:386] Generated self-signed cert in-memory
	W0827 23:21:15.057528       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.83.89:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.83.89:8443: connect: connection refused
	W0827 23:21:15.057615       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:21:15.057640       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:21:15.062026       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:21:15.062100       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0827 23:21:15.062134       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0827 23:21:15.063948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:21:15.063991       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0827 23:21:15.064018       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0827 23:21:15.064146       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0827 23:21:15.064263       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E0827 23:21:15.064408       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 27 23:21:17 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:17.833625    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b9094a91a5905084912764971d916bb6-etcd-certs\") pod \"etcd-kubernetes-upgrade-772694\" (UID: \"b9094a91a5905084912764971d916bb6\") " pod="kube-system/etcd-kubernetes-upgrade-772694"
	Aug 27 23:21:17 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:17.833640    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b9094a91a5905084912764971d916bb6-etcd-data\") pod \"etcd-kubernetes-upgrade-772694\" (UID: \"b9094a91a5905084912764971d916bb6\") " pod="kube-system/etcd-kubernetes-upgrade-772694"
	Aug 27 23:21:17 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:17.833656    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/606a40ac6da8a585c27f10c07a4c7004-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-772694\" (UID: \"606a40ac6da8a585c27f10c07a4c7004\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-772694"
	Aug 27 23:21:17 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:17.992177    2301 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-772694"
	Aug 27 23:21:17 kubernetes-upgrade-772694 kubelet[2301]: E0827 23:21:17.993158    2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.89:8443: connect: connection refused" node="kubernetes-upgrade-772694"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:18.050691    2301 scope.go:117] "RemoveContainer" containerID="7a0fb7f251a5fa0bce71f35b874c773d553c47a9b750c55ec672904746ec52a4"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:18.050995    2301 scope.go:117] "RemoveContainer" containerID="8828e381ec17808d3646e0682fbc5121a7dee13813ef7f1eea8609572ade0f4f"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:18.053978    2301 scope.go:117] "RemoveContainer" containerID="fb6498d4ede60dbace35bdf1062a6674ce8108a6d5019db556dc28df5bea2779"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:18.059696    2301 scope.go:117] "RemoveContainer" containerID="2b455dbee96d0c60a697a7289edd57a6bb21ee130351bf59d8f4da87a46e2b1f"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: E0827 23:21:18.233362    2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-772694?timeout=10s\": dial tcp 192.168.83.89:8443: connect: connection refused" interval="800ms"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:18.394755    2301 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-772694"
	Aug 27 23:21:18 kubernetes-upgrade-772694 kubelet[2301]: E0827 23:21:18.395596    2301 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.89:8443: connect: connection refused" node="kubernetes-upgrade-772694"
	Aug 27 23:21:19 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:19.197210    2301 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-772694"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.009975    2301 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-772694"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.010420    2301 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-772694"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.010509    2301 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.011564    2301 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.620290    2301 apiserver.go:52] "Watching apiserver"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.626567    2301 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.720987    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/288512ee-ecfa-425f-bf91-5535b5854a43-tmp\") pod \"storage-provisioner\" (UID: \"288512ee-ecfa-425f-bf91-5535b5854a43\") " pod="kube-system/storage-provisioner"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.721199    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrsl\" (UniqueName: \"kubernetes.io/projected/288512ee-ecfa-425f-bf91-5535b5854a43-kube-api-access-7xrsl\") pod \"storage-provisioner\" (UID: \"288512ee-ecfa-425f-bf91-5535b5854a43\") " pod="kube-system/storage-provisioner"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.721344    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h246n\" (UniqueName: \"kubernetes.io/projected/13bb9359-3abb-4f60-ac48-4dfebb11bbe2-kube-api-access-h246n\") pod \"kube-proxy-k7hcj\" (UID: \"13bb9359-3abb-4f60-ac48-4dfebb11bbe2\") " pod="kube-system/kube-proxy-k7hcj"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.721504    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13bb9359-3abb-4f60-ac48-4dfebb11bbe2-lib-modules\") pod \"kube-proxy-k7hcj\" (UID: \"13bb9359-3abb-4f60-ac48-4dfebb11bbe2\") " pod="kube-system/kube-proxy-k7hcj"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.721602    2301 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13bb9359-3abb-4f60-ac48-4dfebb11bbe2-xtables-lock\") pod \"kube-proxy-k7hcj\" (UID: \"13bb9359-3abb-4f60-ac48-4dfebb11bbe2\") " pod="kube-system/kube-proxy-k7hcj"
	Aug 27 23:21:21 kubernetes-upgrade-772694 kubelet[2301]: I0827 23:21:21.830856    2301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [a1f90738f4602ab867017271321278b40313aa8a965720213fa99a4d4c53e835] <==
	I0827 23:21:22.513527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 23:21:24.426926   63663 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19522-7571/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
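The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: a bufio.Scanner aborts when a single line exceeds its default 64 KiB token limit, which can happen with very long single-line log entries like the ListContainers dump earlier in this report. The following is only a minimal generic sketch of how that limit is raised with Scanner.Buffer (the file name is hypothetical; this is not minikube's actual logs.go code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical input path, for illustration only.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the default 64 KiB token limit so very long single-line log
		// entries do not abort the scan with bufio.ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, this would print
			// "bufio.Scanner: token too long" for oversized lines.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
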
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-772694 -n kubernetes-upgrade-772694
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-772694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-772694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-772694
--- FAIL: TestKubernetesUpgrade (330.35s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (67.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-677405 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-677405 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.972427845s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-677405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-677405" primary control-plane node in "pause-677405" cluster
	* Updating the running kvm2 "pause-677405" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-677405" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:17:38.334249   58472 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:17:38.334392   58472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:17:38.334402   58472 out.go:358] Setting ErrFile to fd 2...
	I0827 23:17:38.334408   58472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:17:38.334640   58472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:17:38.335279   58472 out.go:352] Setting JSON to false
	I0827 23:17:38.336507   58472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7205,"bootTime":1724793453,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:17:38.336589   58472 start.go:139] virtualization: kvm guest
	I0827 23:17:38.338864   58472 out.go:177] * [pause-677405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:17:38.340228   58472 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:17:38.340260   58472 notify.go:220] Checking for updates...
	I0827 23:17:38.342619   58472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:17:38.343815   58472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:17:38.345112   58472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:17:38.346246   58472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:17:38.347415   58472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:17:38.349408   58472 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:17:38.349922   58472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:17:38.349985   58472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:17:38.366162   58472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0827 23:17:38.366656   58472 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:17:38.367322   58472 main.go:141] libmachine: Using API Version  1
	I0827 23:17:38.367352   58472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:17:38.367690   58472 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:17:38.367875   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:38.368126   58472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:17:38.368488   58472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:17:38.368531   58472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:17:38.384013   58472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0827 23:17:38.384604   58472 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:17:38.385132   58472 main.go:141] libmachine: Using API Version  1
	I0827 23:17:38.385163   58472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:17:38.385530   58472 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:17:38.385725   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:38.423744   58472 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 23:17:38.425089   58472 start.go:297] selected driver: kvm2
	I0827 23:17:38.425107   58472 start.go:901] validating driver "kvm2" against &{Name:pause-677405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-677405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:17:38.425309   58472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:17:38.425758   58472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:17:38.425834   58472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:17:38.442820   58472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:17:38.443749   58472 cni.go:84] Creating CNI manager for ""
	I0827 23:17:38.443768   58472 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:17:38.443840   58472 start.go:340] cluster config:
	{Name:pause-677405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-677405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:17:38.444016   58472 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:17:38.446376   58472 out.go:177] * Starting "pause-677405" primary control-plane node in "pause-677405" cluster
	I0827 23:17:38.447430   58472 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 23:17:38.447471   58472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 23:17:38.447479   58472 cache.go:56] Caching tarball of preloaded images
	I0827 23:17:38.447578   58472 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 23:17:38.447593   58472 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0827 23:17:38.447770   58472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/config.json ...
	I0827 23:17:38.448018   58472 start.go:360] acquireMachinesLock for pause-677405: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:17:45.360870   58472 start.go:364] duration metric: took 6.912804555s to acquireMachinesLock for "pause-677405"
	I0827 23:17:45.360937   58472 start.go:96] Skipping create...Using existing machine configuration
	I0827 23:17:45.360958   58472 fix.go:54] fixHost starting: 
	I0827 23:17:45.361342   58472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:17:45.361401   58472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:17:45.378694   58472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0827 23:17:45.379163   58472 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:17:45.379786   58472 main.go:141] libmachine: Using API Version  1
	I0827 23:17:45.379810   58472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:17:45.380137   58472 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:17:45.380315   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:45.380431   58472 main.go:141] libmachine: (pause-677405) Calling .GetState
	I0827 23:17:45.382076   58472 fix.go:112] recreateIfNeeded on pause-677405: state=Running err=<nil>
	W0827 23:17:45.382098   58472 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 23:17:45.383820   58472 out.go:177] * Updating the running kvm2 "pause-677405" VM ...
	I0827 23:17:45.384984   58472 machine.go:93] provisionDockerMachine start ...
	I0827 23:17:45.385004   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:45.385207   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:45.387835   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.388274   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.388302   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.388458   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:45.388636   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.388770   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.388884   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:45.389023   58472 main.go:141] libmachine: Using SSH client type: native
	I0827 23:17:45.389244   58472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.236 22 <nil> <nil>}
	I0827 23:17:45.389257   58472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:17:45.501623   58472 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-677405
	
	I0827 23:17:45.501654   58472 main.go:141] libmachine: (pause-677405) Calling .GetMachineName
	I0827 23:17:45.501906   58472 buildroot.go:166] provisioning hostname "pause-677405"
	I0827 23:17:45.501938   58472 main.go:141] libmachine: (pause-677405) Calling .GetMachineName
	I0827 23:17:45.502084   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:45.504946   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.505383   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.505408   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.505617   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:45.505844   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.506019   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.506220   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:45.506415   58472 main.go:141] libmachine: Using SSH client type: native
	I0827 23:17:45.506601   58472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.236 22 <nil> <nil>}
	I0827 23:17:45.506618   58472 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-677405 && echo "pause-677405" | sudo tee /etc/hostname
	I0827 23:17:45.630411   58472 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-677405
	
	I0827 23:17:45.630440   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:45.633271   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.633686   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.633709   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.633896   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:45.634115   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.634314   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.634471   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:45.634780   58472 main.go:141] libmachine: Using SSH client type: native
	I0827 23:17:45.635011   58472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.236 22 <nil> <nil>}
	I0827 23:17:45.635036   58472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-677405' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-677405/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-677405' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:17:45.745400   58472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:17:45.745428   58472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19522-7571/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-7571/.minikube}
	I0827 23:17:45.745468   58472 buildroot.go:174] setting up certificates
	I0827 23:17:45.745489   58472 provision.go:84] configureAuth start
	I0827 23:17:45.745504   58472 main.go:141] libmachine: (pause-677405) Calling .GetMachineName
	I0827 23:17:45.745788   58472 main.go:141] libmachine: (pause-677405) Calling .GetIP
	I0827 23:17:45.748801   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.749154   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.749193   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.749339   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:45.751672   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.752036   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.752061   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.752206   58472 provision.go:143] copyHostCerts
	I0827 23:17:45.752264   58472 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem, removing ...
	I0827 23:17:45.752285   58472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem
	I0827 23:17:45.752371   58472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/ca.pem (1082 bytes)
	I0827 23:17:45.752523   58472 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem, removing ...
	I0827 23:17:45.752535   58472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem
	I0827 23:17:45.752561   58472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/cert.pem (1123 bytes)
	I0827 23:17:45.752636   58472 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem, removing ...
	I0827 23:17:45.752644   58472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem
	I0827 23:17:45.752662   58472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-7571/.minikube/key.pem (1679 bytes)
	I0827 23:17:45.752725   58472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem org=jenkins.pause-677405 san=[127.0.0.1 192.168.61.236 localhost minikube pause-677405]
	I0827 23:17:45.853059   58472 provision.go:177] copyRemoteCerts
	I0827 23:17:45.853119   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:17:45.853144   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:45.856132   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.856584   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:45.856621   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:45.856831   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:45.857033   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:45.857183   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:45.857337   58472 sshutil.go:53] new ssh client: &{IP:192.168.61.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/pause-677405/id_rsa Username:docker}
	I0827 23:17:45.940657   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0827 23:17:45.965188   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 23:17:45.992067   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0827 23:17:46.017268   58472 provision.go:87] duration metric: took 271.762317ms to configureAuth
	I0827 23:17:46.017299   58472 buildroot.go:189] setting minikube options for container-runtime
	I0827 23:17:46.017928   58472 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:17:46.018037   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:46.021404   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:46.021729   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:46.021754   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:46.021966   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:46.022157   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:46.022335   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:46.022443   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:46.022588   58472 main.go:141] libmachine: Using SSH client type: native
	I0827 23:17:46.022745   58472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.236 22 <nil> <nil>}
	I0827 23:17:46.022760   58472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0827 23:17:51.555934   58472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0827 23:17:51.555961   58472 machine.go:96] duration metric: took 6.170961939s to provisionDockerMachine
	I0827 23:17:51.555976   58472 start.go:293] postStartSetup for "pause-677405" (driver="kvm2")
	I0827 23:17:51.555990   58472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:17:51.556014   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:51.556353   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:17:51.556383   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:51.559247   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.559714   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:51.559743   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.559974   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:51.560183   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:51.560350   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:51.560512   58472 sshutil.go:53] new ssh client: &{IP:192.168.61.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/pause-677405/id_rsa Username:docker}
	I0827 23:17:51.647289   58472 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:17:51.652678   58472 info.go:137] Remote host: Buildroot 2023.02.9
	I0827 23:17:51.652706   58472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/addons for local assets ...
	I0827 23:17:51.652786   58472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-7571/.minikube/files for local assets ...
	I0827 23:17:51.652893   58472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem -> 147652.pem in /etc/ssl/certs
	I0827 23:17:51.653018   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:17:51.665183   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:17:51.693467   58472 start.go:296] duration metric: took 137.476214ms for postStartSetup
	I0827 23:17:51.693508   58472 fix.go:56] duration metric: took 6.332556417s for fixHost
	I0827 23:17:51.693574   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:51.697147   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.697646   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:51.697685   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.697872   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:51.698088   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:51.698266   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:51.698430   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:51.698652   58472 main.go:141] libmachine: Using SSH client type: native
	I0827 23:17:51.698872   58472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.236 22 <nil> <nil>}
	I0827 23:17:51.698888   58472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0827 23:17:51.813291   58472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724800671.790280490
	
	I0827 23:17:51.813320   58472 fix.go:216] guest clock: 1724800671.790280490
	I0827 23:17:51.813350   58472 fix.go:229] Guest: 2024-08-27 23:17:51.79028049 +0000 UTC Remote: 2024-08-27 23:17:51.693513493 +0000 UTC m=+13.406124856 (delta=96.766997ms)
	I0827 23:17:51.813399   58472 fix.go:200] guest clock delta is within tolerance: 96.766997ms
	I0827 23:17:51.813408   58472 start.go:83] releasing machines lock for "pause-677405", held for 6.452511934s
	I0827 23:17:51.813438   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:51.813728   58472 main.go:141] libmachine: (pause-677405) Calling .GetIP
	I0827 23:17:51.817099   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.817536   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:51.817562   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.817772   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:51.818259   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:51.818470   58472 main.go:141] libmachine: (pause-677405) Calling .DriverName
	I0827 23:17:51.818573   58472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:17:51.818629   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:51.818683   58472 ssh_runner.go:195] Run: cat /version.json
	I0827 23:17:51.818709   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHHostname
	I0827 23:17:51.821542   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.821623   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.821987   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:51.822013   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.822120   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:51.822146   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:51.822323   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:51.822385   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHPort
	I0827 23:17:51.822481   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:51.822520   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHKeyPath
	I0827 23:17:51.822601   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:51.822655   58472 main.go:141] libmachine: (pause-677405) Calling .GetSSHUsername
	I0827 23:17:51.822728   58472 sshutil.go:53] new ssh client: &{IP:192.168.61.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/pause-677405/id_rsa Username:docker}
	I0827 23:17:51.822770   58472 sshutil.go:53] new ssh client: &{IP:192.168.61.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/pause-677405/id_rsa Username:docker}
	I0827 23:17:51.910317   58472 ssh_runner.go:195] Run: systemctl --version
	I0827 23:17:51.942670   58472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0827 23:17:52.112025   58472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0827 23:17:52.122988   58472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0827 23:17:52.123064   58472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:17:52.133730   58472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 23:17:52.133756   58472 start.go:495] detecting cgroup driver to use...
	I0827 23:17:52.133826   58472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0827 23:17:52.152710   58472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0827 23:17:52.172524   58472 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:17:52.172589   58472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:17:52.187455   58472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:17:52.202134   58472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:17:52.333775   58472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:17:52.458477   58472 docker.go:233] disabling docker service ...
	I0827 23:17:52.458547   58472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:17:52.478790   58472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:17:52.492968   58472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:17:52.630754   58472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:17:52.760043   58472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:17:52.773292   58472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:17:52.791475   58472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0827 23:17:52.791530   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.802683   58472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0827 23:17:52.802739   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.812534   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.823167   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.834641   58472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:17:52.845069   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.856514   58472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.867738   58472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0827 23:17:52.878229   58472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:17:52.887555   58472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:17:52.896675   58472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:17:53.037234   58472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0827 23:17:54.664447   58472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.627174985s)
	I0827 23:17:54.664505   58472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0827 23:17:54.664566   58472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0827 23:17:54.675938   58472 start.go:563] Will wait 60s for crictl version
	I0827 23:17:54.676026   58472 ssh_runner.go:195] Run: which crictl
	I0827 23:17:54.679844   58472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:17:54.836041   58472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0827 23:17:54.836132   58472 ssh_runner.go:195] Run: crio --version
	I0827 23:17:54.969369   58472 ssh_runner.go:195] Run: crio --version
	I0827 23:17:55.153183   58472 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0827 23:17:55.154378   58472 main.go:141] libmachine: (pause-677405) Calling .GetIP
	I0827 23:17:55.158127   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:55.158643   58472 main.go:141] libmachine: (pause-677405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:67:1b", ip: ""} in network mk-pause-677405: {Iface:virbr3 ExpiryTime:2024-08-28 00:17:00 +0000 UTC Type:0 Mac:52:54:00:f4:67:1b Iaid: IPaddr:192.168.61.236 Prefix:24 Hostname:pause-677405 Clientid:01:52:54:00:f4:67:1b}
	I0827 23:17:55.158676   58472 main.go:141] libmachine: (pause-677405) DBG | domain pause-677405 has defined IP address 192.168.61.236 and MAC address 52:54:00:f4:67:1b in network mk-pause-677405
	I0827 23:17:55.158960   58472 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0827 23:17:55.195860   58472 kubeadm.go:883] updating cluster {Name:pause-677405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-677405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:17:55.196061   58472 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 23:17:55.196130   58472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:17:55.414954   58472 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 23:17:55.414979   58472 crio.go:433] Images already preloaded, skipping extraction
	I0827 23:17:55.415040   58472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:17:55.495710   58472 crio.go:514] all images are preloaded for cri-o runtime.
	I0827 23:17:55.495736   58472 cache_images.go:84] Images are preloaded, skipping loading
	I0827 23:17:55.495745   58472 kubeadm.go:934] updating node { 192.168.61.236 8443 v1.31.0 crio true true} ...
	I0827 23:17:55.495886   58472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-677405 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-677405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:17:55.495971   58472 ssh_runner.go:195] Run: crio config
	I0827 23:17:55.607273   58472 cni.go:84] Creating CNI manager for ""
	I0827 23:17:55.607296   58472 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:17:55.607307   58472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:17:55.607338   58472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.236 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-677405 NodeName:pause-677405 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 23:17:55.607513   58472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-677405"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:17:55.607579   58472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 23:17:55.620439   58472 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:17:55.620524   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:17:55.632339   58472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0827 23:17:55.654360   58472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:17:55.672313   58472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0827 23:17:55.692812   58472 ssh_runner.go:195] Run: grep 192.168.61.236	control-plane.minikube.internal$ /etc/hosts
	I0827 23:17:55.696929   58472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:17:55.933670   58472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:17:55.958215   58472 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405 for IP: 192.168.61.236
	I0827 23:17:55.958243   58472 certs.go:194] generating shared ca certs ...
	I0827 23:17:55.958263   58472 certs.go:226] acquiring lock for ca certs: {Name:mk0d5129069055cf3f4fbd692fa5406a22d754ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:17:55.958431   58472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key
	I0827 23:17:55.958481   58472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key
	I0827 23:17:55.958493   58472 certs.go:256] generating profile certs ...
	I0827 23:17:55.958606   58472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/client.key
	I0827 23:17:55.958684   58472 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/apiserver.key.f773f523
	I0827 23:17:55.958744   58472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/proxy-client.key
	I0827 23:17:55.958883   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem (1338 bytes)
	W0827 23:17:55.958922   58472 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765_empty.pem, impossibly tiny 0 bytes
	I0827 23:17:55.958935   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:17:55.958966   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem (1082 bytes)
	I0827 23:17:55.958998   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:17:55.959028   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/certs/key.pem (1679 bytes)
	I0827 23:17:55.959078   58472 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem (1708 bytes)
	I0827 23:17:55.960026   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:17:56.017319   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 23:17:56.046625   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:17:56.083948   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:17:56.139402   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0827 23:17:56.170218   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 23:17:56.195921   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:17:56.227555   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/pause-677405/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:17:56.255789   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/certs/14765.pem --> /usr/share/ca-certificates/14765.pem (1338 bytes)
	I0827 23:17:56.280912   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/ssl/certs/147652.pem --> /usr/share/ca-certificates/147652.pem (1708 bytes)
	I0827 23:17:56.311562   58472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-7571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:17:56.340411   58472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:17:56.359806   58472 ssh_runner.go:195] Run: openssl version
	I0827 23:17:56.365270   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147652.pem && ln -fs /usr/share/ca-certificates/147652.pem /etc/ssl/certs/147652.pem"
	I0827 23:17:56.378619   58472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147652.pem
	I0827 23:17:56.383293   58472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 22:18 /usr/share/ca-certificates/147652.pem
	I0827 23:17:56.383355   58472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147652.pem
	I0827 23:17:56.390144   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/147652.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:17:56.404140   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:17:56.415638   58472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:17:56.420337   58472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:17:56.420406   58472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:17:56.430223   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:17:56.446655   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14765.pem && ln -fs /usr/share/ca-certificates/14765.pem /etc/ssl/certs/14765.pem"
	I0827 23:17:56.463248   58472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14765.pem
	I0827 23:17:56.468834   58472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 22:18 /usr/share/ca-certificates/14765.pem
	I0827 23:17:56.468898   58472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14765.pem
	I0827 23:17:56.475393   58472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14765.pem /etc/ssl/certs/51391683.0"
	I0827 23:17:56.485805   58472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:17:56.490252   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 23:17:56.497189   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 23:17:56.502734   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 23:17:56.509496   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 23:17:56.515313   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 23:17:56.520997   58472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 23:17:56.526622   58472 kubeadm.go:392] StartCluster: {Name:pause-677405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-677405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:17:56.526776   58472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0827 23:17:56.526827   58472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:17:56.572222   58472 cri.go:89] found id: "1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579"
	I0827 23:17:56.572252   58472 cri.go:89] found id: "a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928"
	I0827 23:17:56.572258   58472 cri.go:89] found id: "f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7"
	I0827 23:17:56.572263   58472 cri.go:89] found id: "cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39"
	I0827 23:17:56.572266   58472 cri.go:89] found id: "8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a"
	I0827 23:17:56.572272   58472 cri.go:89] found id: "bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa"
	I0827 23:17:56.572277   58472 cri.go:89] found id: "8fffa7a101786c774c0ded05c859a5ac6a4359bb12997a5a961d5efe7e8b918c"
	I0827 23:17:56.572282   58472 cri.go:89] found id: "ad793dd1866cf7ec0f6ef8b69807c928b04230a5271ee828824104e0077675d1"
	I0827 23:17:56.572286   58472 cri.go:89] found id: "aafb76c05cb4f5384695c2f00137513b1ffeaea8d8c0385a921e77d14ca030e6"
	I0827 23:17:56.572293   58472 cri.go:89] found id: "e13ed7b8eb0402b40b21e6bf8cd6c248d7f78592b37b7a0beedb0e31f3953ecf"
	I0827 23:17:56.572298   58472 cri.go:89] found id: "a28185b67ad66214ac8871315c2cf6dc7c9b9ba255c0d5235ebf124b8b8311b7"
	I0827 23:17:56.572302   58472 cri.go:89] found id: "2df5c0802fdde37cf70f7964ef60cf96b2e90b16b2d95e3b05d3fedbf8d30345"
	I0827 23:17:56.572306   58472 cri.go:89] found id: ""
	I0827 23:17:56.572360   58472 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-677405 -n pause-677405
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-677405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-677405 logs -n 25: (1.368482088s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo docker                         | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo find                           | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo crio                           | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-409668                                     | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	| start   | -p old-k8s-version-686432                            | old-k8s-version-686432 | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:18:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:18:24.962537   61383 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:18:24.963009   61383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:18:24.963027   61383 out.go:358] Setting ErrFile to fd 2...
	I0827 23:18:24.963035   61383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:18:24.963480   61383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:18:24.964565   61383 out.go:352] Setting JSON to false
	I0827 23:18:24.965548   61383 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7252,"bootTime":1724793453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:18:24.965615   61383 start.go:139] virtualization: kvm guest
	I0827 23:18:24.967375   61383 out.go:177] * [old-k8s-version-686432] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:18:24.968869   61383 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:18:24.968876   61383 notify.go:220] Checking for updates...
	I0827 23:18:24.971021   61383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:18:24.972176   61383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:18:24.973419   61383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:24.974596   61383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:18:24.975887   61383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:18:24.977679   61383 config.go:182] Loaded profile config "cert-expiration-649861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:24.977820   61383 config.go:182] Loaded profile config "kubernetes-upgrade-772694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0827 23:18:24.978022   61383 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:24.978145   61383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:18:25.014657   61383 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 23:18:25.015923   61383 start.go:297] selected driver: kvm2
	I0827 23:18:25.015947   61383 start.go:901] validating driver "kvm2" against <nil>
	I0827 23:18:25.015958   61383 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:18:25.016744   61383 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:18:25.016834   61383 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:18:25.033267   61383 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:18:25.033324   61383 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:18:25.033529   61383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:18:25.033598   61383 cni.go:84] Creating CNI manager for ""
	I0827 23:18:25.033610   61383 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:18:25.033619   61383 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 23:18:25.033669   61383 start.go:340] cluster config:
	{Name:old-k8s-version-686432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:18:25.033780   61383 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:18:25.035405   61383 out.go:177] * Starting "old-k8s-version-686432" primary control-plane node in "old-k8s-version-686432" cluster
	I0827 23:18:25.036531   61383 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 23:18:25.036572   61383 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0827 23:18:25.036583   61383 cache.go:56] Caching tarball of preloaded images
	I0827 23:18:25.036674   61383 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 23:18:25.036688   61383 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0827 23:18:25.036806   61383 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/old-k8s-version-686432/config.json ...
	I0827 23:18:25.036828   61383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/old-k8s-version-686432/config.json: {Name:mk0e7629933894ffa318f693efc58312bc76bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:25.036992   61383 start.go:360] acquireMachinesLock for old-k8s-version-686432: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:18:25.037039   61383 start.go:364] duration metric: took 27.546µs to acquireMachinesLock for "old-k8s-version-686432"
	I0827 23:18:25.037063   61383 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-686432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:18:25.037136   61383 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 23:18:23.603434   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 23:18:23.616086   58472 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0827 23:18:23.635496   58472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:18:23.635591   58472 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 23:18:23.635617   58472 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 23:18:23.649209   58472 system_pods.go:59] 6 kube-system pods found
	I0827 23:18:23.649237   58472 system_pods.go:61] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0827 23:18:23.649244   58472 system_pods.go:61] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0827 23:18:23.649250   58472 system_pods.go:61] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0827 23:18:23.649262   58472 system_pods.go:61] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0827 23:18:23.649269   58472 system_pods.go:61] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0827 23:18:23.649274   58472 system_pods.go:61] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0827 23:18:23.649282   58472 system_pods.go:74] duration metric: took 13.763924ms to wait for pod list to return data ...
	I0827 23:18:23.649288   58472 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:18:23.654810   58472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:18:23.654840   58472 node_conditions.go:123] node cpu capacity is 2
	I0827 23:18:23.654853   58472 node_conditions.go:105] duration metric: took 5.560712ms to run NodePressure ...
	I0827 23:18:23.654872   58472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:23.961941   58472 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0827 23:18:23.968001   58472 kubeadm.go:739] kubelet initialised
	I0827 23:18:23.968026   58472 kubeadm.go:740] duration metric: took 6.060012ms waiting for restarted kubelet to initialise ...
	I0827 23:18:23.968033   58472 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:23.972798   58472 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:25.980592   58472 pod_ready.go:103] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:26.981229   58472 pod_ready.go:93] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:26.981254   58472 pod_ready.go:82] duration metric: took 3.00843116s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:26.981265   58472 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:28.703524   57186 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0827 23:18:28.703638   57186 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0827 23:18:28.705497   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:28.705571   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:28.705694   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:28.705809   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:28.705934   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:28.705999   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:28.707739   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:28.707823   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:28.707896   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:28.707970   57186 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 23:18:28.708018   57186 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 23:18:28.708093   57186 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 23:18:28.708141   57186 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 23:18:28.708228   57186 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 23:18:28.708405   57186 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708498   57186 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 23:18:28.708676   57186 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708801   57186 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 23:18:28.708907   57186 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 23:18:28.708973   57186 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 23:18:28.709048   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:28.709116   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:28.709179   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:28.709247   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:28.709319   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:28.709449   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:28.709579   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:28.709632   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:28.709707   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:25.038555   61383 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 23:18:25.038685   61383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:25.038720   61383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:25.053716   61383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0827 23:18:25.054219   61383 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:18:25.054815   61383 main.go:141] libmachine: Using API Version  1
	I0827 23:18:25.054857   61383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:18:25.055278   61383 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:18:25.055516   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .GetMachineName
	I0827 23:18:25.055690   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .DriverName
	I0827 23:18:25.055851   61383 start.go:159] libmachine.API.Create for "old-k8s-version-686432" (driver="kvm2")
	I0827 23:18:25.055883   61383 client.go:168] LocalClient.Create starting
	I0827 23:18:25.055921   61383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 23:18:25.055963   61383 main.go:141] libmachine: Decoding PEM data...
	I0827 23:18:25.055987   61383 main.go:141] libmachine: Parsing certificate...
	I0827 23:18:25.056063   61383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 23:18:25.056083   61383 main.go:141] libmachine: Decoding PEM data...
	I0827 23:18:25.056094   61383 main.go:141] libmachine: Parsing certificate...
	I0827 23:18:25.056107   61383 main.go:141] libmachine: Running pre-create checks...
	I0827 23:18:25.056121   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .PreCreateCheck
	I0827 23:18:25.056596   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .GetConfigRaw
	I0827 23:18:25.057053   61383 main.go:141] libmachine: Creating machine...
	I0827 23:18:25.057074   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .Create
	I0827 23:18:25.057240   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating KVM machine...
	I0827 23:18:25.058600   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | found existing default KVM network
	I0827 23:18:25.060077   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.059942   61405 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0827 23:18:25.060100   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | created network xml: 
	I0827 23:18:25.060113   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | <network>
	I0827 23:18:25.060122   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <name>mk-old-k8s-version-686432</name>
	I0827 23:18:25.060131   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <dns enable='no'/>
	I0827 23:18:25.060141   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   
	I0827 23:18:25.060151   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0827 23:18:25.060165   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |     <dhcp>
	I0827 23:18:25.060206   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0827 23:18:25.060232   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |     </dhcp>
	I0827 23:18:25.060251   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   </ip>
	I0827 23:18:25.060260   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   
	I0827 23:18:25.060266   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | </network>
	I0827 23:18:25.060271   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | 
	I0827 23:18:25.065399   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | trying to create private KVM network mk-old-k8s-version-686432 192.168.39.0/24...
	I0827 23:18:25.136738   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | private KVM network mk-old-k8s-version-686432 192.168.39.0/24 created
	I0827 23:18:25.136786   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.136677   61405 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:25.136800   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 ...
	I0827 23:18:25.136847   61383 main.go:141] libmachine: (old-k8s-version-686432) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 23:18:25.136882   61383 main.go:141] libmachine: (old-k8s-version-686432) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 23:18:25.418963   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.418831   61405 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/id_rsa...
	I0827 23:18:25.626647   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.626487   61405 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/old-k8s-version-686432.rawdisk...
	I0827 23:18:25.626685   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Writing magic tar header
	I0827 23:18:25.626706   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Writing SSH key tar header
	I0827 23:18:25.626718   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.626654   61405 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 ...
	I0827 23:18:25.626808   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432
	I0827 23:18:25.626834   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 23:18:25.626849   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 (perms=drwx------)
	I0827 23:18:25.626886   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:25.626912   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 23:18:25.626930   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 23:18:25.626947   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 23:18:25.626957   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins
	I0827 23:18:25.626987   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home
	I0827 23:18:25.627004   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Skipping /home - not owner
	I0827 23:18:25.627017   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 23:18:25.627032   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 23:18:25.627045   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 23:18:25.627085   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 23:18:25.627104   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating domain...
	I0827 23:18:25.628237   61383 main.go:141] libmachine: (old-k8s-version-686432) define libvirt domain using xml: 
	I0827 23:18:25.628262   61383 main.go:141] libmachine: (old-k8s-version-686432) <domain type='kvm'>
	I0827 23:18:25.628284   61383 main.go:141] libmachine: (old-k8s-version-686432)   <name>old-k8s-version-686432</name>
	I0827 23:18:25.628301   61383 main.go:141] libmachine: (old-k8s-version-686432)   <memory unit='MiB'>2200</memory>
	I0827 23:18:25.628310   61383 main.go:141] libmachine: (old-k8s-version-686432)   <vcpu>2</vcpu>
	I0827 23:18:25.628320   61383 main.go:141] libmachine: (old-k8s-version-686432)   <features>
	I0827 23:18:25.628340   61383 main.go:141] libmachine: (old-k8s-version-686432)     <acpi/>
	I0827 23:18:25.628351   61383 main.go:141] libmachine: (old-k8s-version-686432)     <apic/>
	I0827 23:18:25.628369   61383 main.go:141] libmachine: (old-k8s-version-686432)     <pae/>
	I0827 23:18:25.628385   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.628426   61383 main.go:141] libmachine: (old-k8s-version-686432)   </features>
	I0827 23:18:25.628450   61383 main.go:141] libmachine: (old-k8s-version-686432)   <cpu mode='host-passthrough'>
	I0827 23:18:25.628479   61383 main.go:141] libmachine: (old-k8s-version-686432)   
	I0827 23:18:25.628493   61383 main.go:141] libmachine: (old-k8s-version-686432)   </cpu>
	I0827 23:18:25.628506   61383 main.go:141] libmachine: (old-k8s-version-686432)   <os>
	I0827 23:18:25.628516   61383 main.go:141] libmachine: (old-k8s-version-686432)     <type>hvm</type>
	I0827 23:18:25.628527   61383 main.go:141] libmachine: (old-k8s-version-686432)     <boot dev='cdrom'/>
	I0827 23:18:25.628535   61383 main.go:141] libmachine: (old-k8s-version-686432)     <boot dev='hd'/>
	I0827 23:18:25.628545   61383 main.go:141] libmachine: (old-k8s-version-686432)     <bootmenu enable='no'/>
	I0827 23:18:25.628555   61383 main.go:141] libmachine: (old-k8s-version-686432)   </os>
	I0827 23:18:25.628578   61383 main.go:141] libmachine: (old-k8s-version-686432)   <devices>
	I0827 23:18:25.628597   61383 main.go:141] libmachine: (old-k8s-version-686432)     <disk type='file' device='cdrom'>
	I0827 23:18:25.628648   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/boot2docker.iso'/>
	I0827 23:18:25.628675   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target dev='hdc' bus='scsi'/>
	I0827 23:18:25.628687   61383 main.go:141] libmachine: (old-k8s-version-686432)       <readonly/>
	I0827 23:18:25.628699   61383 main.go:141] libmachine: (old-k8s-version-686432)     </disk>
	I0827 23:18:25.628715   61383 main.go:141] libmachine: (old-k8s-version-686432)     <disk type='file' device='disk'>
	I0827 23:18:25.628728   61383 main.go:141] libmachine: (old-k8s-version-686432)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 23:18:25.628744   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/old-k8s-version-686432.rawdisk'/>
	I0827 23:18:25.628755   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target dev='hda' bus='virtio'/>
	I0827 23:18:25.628766   61383 main.go:141] libmachine: (old-k8s-version-686432)     </disk>
	I0827 23:18:25.628780   61383 main.go:141] libmachine: (old-k8s-version-686432)     <interface type='network'>
	I0827 23:18:25.628792   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source network='mk-old-k8s-version-686432'/>
	I0827 23:18:25.628804   61383 main.go:141] libmachine: (old-k8s-version-686432)       <model type='virtio'/>
	I0827 23:18:25.628817   61383 main.go:141] libmachine: (old-k8s-version-686432)     </interface>
	I0827 23:18:25.628829   61383 main.go:141] libmachine: (old-k8s-version-686432)     <interface type='network'>
	I0827 23:18:25.628841   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source network='default'/>
	I0827 23:18:25.628857   61383 main.go:141] libmachine: (old-k8s-version-686432)       <model type='virtio'/>
	I0827 23:18:25.628869   61383 main.go:141] libmachine: (old-k8s-version-686432)     </interface>
	I0827 23:18:25.628878   61383 main.go:141] libmachine: (old-k8s-version-686432)     <serial type='pty'>
	I0827 23:18:25.628889   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target port='0'/>
	I0827 23:18:25.628898   61383 main.go:141] libmachine: (old-k8s-version-686432)     </serial>
	I0827 23:18:25.628907   61383 main.go:141] libmachine: (old-k8s-version-686432)     <console type='pty'>
	I0827 23:18:25.628917   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target type='serial' port='0'/>
	I0827 23:18:25.628938   61383 main.go:141] libmachine: (old-k8s-version-686432)     </console>
	I0827 23:18:25.628960   61383 main.go:141] libmachine: (old-k8s-version-686432)     <rng model='virtio'>
	I0827 23:18:25.628986   61383 main.go:141] libmachine: (old-k8s-version-686432)       <backend model='random'>/dev/random</backend>
	I0827 23:18:25.629005   61383 main.go:141] libmachine: (old-k8s-version-686432)     </rng>
	I0827 23:18:25.629016   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.629026   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.629035   61383 main.go:141] libmachine: (old-k8s-version-686432)   </devices>
	I0827 23:18:25.629046   61383 main.go:141] libmachine: (old-k8s-version-686432) </domain>
	I0827 23:18:25.629058   61383 main.go:141] libmachine: (old-k8s-version-686432) 
	I0827 23:18:25.632695   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:ce:a1:37 in network default
	I0827 23:18:25.633520   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring networks are active...
	I0827 23:18:25.633543   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:25.634386   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring network default is active
	I0827 23:18:25.634881   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring network mk-old-k8s-version-686432 is active
	I0827 23:18:25.635623   61383 main.go:141] libmachine: (old-k8s-version-686432) Getting domain xml...
	I0827 23:18:25.636674   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating domain...
	I0827 23:18:26.871714   61383 main.go:141] libmachine: (old-k8s-version-686432) Waiting to get IP...
	I0827 23:18:26.872769   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:26.873346   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:26.873375   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:26.873313   61405 retry.go:31] will retry after 243.262082ms: waiting for machine to come up
	I0827 23:18:27.118726   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.119381   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.119410   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.119334   61405 retry.go:31] will retry after 267.892861ms: waiting for machine to come up
	I0827 23:18:27.388778   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.389491   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.389519   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.389434   61405 retry.go:31] will retry after 383.832728ms: waiting for machine to come up
	I0827 23:18:27.774616   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.775092   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.775122   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.775054   61405 retry.go:31] will retry after 438.087912ms: waiting for machine to come up
	I0827 23:18:28.214564   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:28.215161   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:28.215200   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:28.215076   61405 retry.go:31] will retry after 486.657449ms: waiting for machine to come up
	I0827 23:18:28.703942   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:28.704484   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:28.704517   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:28.704410   61405 retry.go:31] will retry after 871.633482ms: waiting for machine to come up
	I0827 23:18:29.577366   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:29.577864   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:29.577904   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:29.577825   61405 retry.go:31] will retry after 806.704378ms: waiting for machine to come up
	I0827 23:18:28.711322   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:28.711397   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:28.711465   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:28.711523   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:28.711593   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:28.711779   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:18:28.711847   57186 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0827 23:18:28.711926   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712158   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712221   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712370   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712442   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712643   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712710   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712912   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712987   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.713213   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.713256   57186 kubeadm.go:310] 
	I0827 23:18:28.713308   57186 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0827 23:18:28.713364   57186 kubeadm.go:310] 		timed out waiting for the condition
	I0827 23:18:28.713377   57186 kubeadm.go:310] 
	I0827 23:18:28.713433   57186 kubeadm.go:310] 	This error is likely caused by:
	I0827 23:18:28.713494   57186 kubeadm.go:310] 		- The kubelet is not running
	I0827 23:18:28.713606   57186 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0827 23:18:28.713616   57186 kubeadm.go:310] 
	I0827 23:18:28.713724   57186 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0827 23:18:28.713768   57186 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0827 23:18:28.713821   57186 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0827 23:18:28.713832   57186 kubeadm.go:310] 
	I0827 23:18:28.714013   57186 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0827 23:18:28.714104   57186 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0827 23:18:28.714126   57186 kubeadm.go:310] 
	I0827 23:18:28.714244   57186 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0827 23:18:28.714343   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0827 23:18:28.714430   57186 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0827 23:18:28.714511   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0827 23:18:28.714535   57186 kubeadm.go:310] 
	W0827 23:18:28.714652   57186 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
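The kubeadm output above already names the relevant checks. To reproduce this triage by hand from inside the VM, a minimal sketch (assuming the same kubelet unit and CRI-O socket shown in the log) would be:

    # check whether the kubelet is up and why it may have exited
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # probe the healthz endpoint the kubelet-check above keeps retrying
    curl -sSL http://localhost:10248/healthz
    # list control-plane containers known to CRI-O, then inspect a failing one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID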
	
	I0827 23:18:28.714696   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0827 23:18:29.898172   57186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.183448402s)
	I0827 23:18:29.898262   57186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:18:29.912178   57186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:18:29.921912   57186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:18:29.921942   57186 kubeadm.go:157] found existing configuration files:
	
	I0827 23:18:29.921995   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:18:29.931061   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:18:29.931129   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:18:29.940344   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:18:29.949652   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:18:29.949722   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:18:29.960201   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.969231   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:18:29.969282   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.978533   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:18:29.988747   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:18:29.988810   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
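The grep/rm sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A condensed sketch of the same check, assuming shell access to the node (paths and endpoint taken from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # remove the file if it is missing or points at a different API server endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done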
	I0827 23:18:29.999648   57186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 23:18:30.063431   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:30.063498   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:30.207367   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:30.207495   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:30.207594   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:30.407332   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:30.409199   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:30.409316   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:30.409433   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:30.409544   57186 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 23:18:30.409633   57186 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 23:18:30.409829   57186 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 23:18:30.409940   57186 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 23:18:30.410064   57186 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 23:18:30.410165   57186 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 23:18:30.410233   57186 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 23:18:30.410330   57186 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 23:18:30.410404   57186 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 23:18:30.410512   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:30.487640   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:30.697348   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:30.969478   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:31.147504   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:31.166119   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:31.167645   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:31.167719   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:31.310340   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:31.312219   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:31.312363   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:31.317236   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:31.318847   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:31.319995   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:31.322520   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:18:28.987395   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:30.988548   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:30.386569   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:30.387080   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:30.387109   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:30.387027   61405 retry.go:31] will retry after 1.235203034s: waiting for machine to come up
	I0827 23:18:31.623266   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:31.623783   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:31.623810   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:31.623742   61405 retry.go:31] will retry after 1.565691941s: waiting for machine to come up
	I0827 23:18:33.190759   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:33.191298   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:33.191322   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:33.191254   61405 retry.go:31] will retry after 1.717801152s: waiting for machine to come up
	I0827 23:18:34.911109   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:34.911682   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:34.911719   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:34.911585   61405 retry.go:31] will retry after 2.83421675s: waiting for machine to come up
	I0827 23:18:33.488446   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:35.987291   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:36.987883   58472 pod_ready.go:93] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:36.987913   58472 pod_ready.go:82] duration metric: took 10.006638814s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:36.987928   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:38.995520   58472 pod_ready.go:93] pod "kube-apiserver-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:38.995546   58472 pod_ready.go:82] duration metric: took 2.007609842s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:38.995556   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.001525   58472 pod_ready.go:93] pod "kube-controller-manager-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.001553   58472 pod_ready.go:82] duration metric: took 5.989835ms for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.001565   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.007361   58472 pod_ready.go:93] pod "kube-proxy-8zvr2" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.007387   58472 pod_ready.go:82] duration metric: took 5.814948ms for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.007399   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.012508   58472 pod_ready.go:93] pod "kube-scheduler-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.012535   58472 pod_ready.go:82] duration metric: took 5.126848ms for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.012548   58472 pod_ready.go:39] duration metric: took 15.044505134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
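The pod_ready waits above poll each system pod's Ready condition in turn. A rough host-side equivalent (hypothetical, not minikube's own code) using the kubectl context this profile writes and the labels listed in the log:

    kubectl --context pause-677405 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context pause-677405 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=4m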
	I0827 23:18:39.012567   58472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 23:18:39.025655   58472 ops.go:34] apiserver oom_adj: -16
	I0827 23:18:39.025679   58472 kubeadm.go:597] duration metric: took 42.387767372s to restartPrimaryControlPlane
	I0827 23:18:39.025689   58472 kubeadm.go:394] duration metric: took 42.499074194s to StartCluster
	I0827 23:18:39.025709   58472 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:39.025795   58472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:18:39.026582   58472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:39.026837   58472 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:18:39.026884   58472 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:18:39.027066   58472 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:39.028278   58472 out.go:177] * Verifying Kubernetes components...
	I0827 23:18:39.028293   58472 out.go:177] * Enabled addons: 
	I0827 23:18:37.749556   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:37.750082   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:37.750112   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:37.750034   61405 retry.go:31] will retry after 3.605421651s: waiting for machine to come up
	I0827 23:18:37.424633   59072 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c 4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7 2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8 79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815 2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f 1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225 c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb ff6fba7d4cd567c87fbf4331c6d0410ff65f9c87af20b9d26bc62bd65968fbca 4c7a5b5e55b4cadc68409de4b4cd00a9aa774136556ebc3b590d3aea717c2651 2059be0e6d199f8a32bd20d4a25bbbe67add567daf6e562d4f7f65ecdca02ecb c4fa73f84baac8efe9498a6678f09065171ec0ee25d26c909cff376965a4cab6 fdf87ebc8784d2819f3ad07885bdbb04231879c111b7bc63a8f106dd24add9bb: (20.468100508s)
	W0827 23:18:37.424709   59072 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c 4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7 2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8 79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815 2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f 1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225 c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb ff6fba7d4cd567c87fbf4331c6d0410ff65f9c87af20b9d26bc62bd65968fbca 4c7a5b5e55b4cadc68409de4b4cd00a9aa774136556ebc3b590d3aea717c2651 2059be0e6d199f8a32bd20d4a25bbbe67add567daf6e562d4f7f65ecdca02ecb c4fa73f84baac8efe9498a6678f09065171ec0ee25d26c909cff376965a4cab6 fdf87ebc8784d2819f3ad07885bdbb04231879c111b7bc63a8f106dd24add9bb: Process exited with status 1
	stdout:
	855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c
	4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7
	2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef
	a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8
	79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815
	2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f
	1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225
	
	stderr:
	E0827 23:18:37.406258    2982 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": container with ID starting with c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb not found: ID does not exist" containerID="c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb"
	time="2024-08-27T23:18:37Z" level=fatal msg="stopping the container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": rpc error: code = NotFound desc = could not find container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": container with ID starting with c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb not found: ID does not exist"
	I0827 23:18:37.424763   59072 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 23:18:37.460670   59072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:18:37.471148   59072 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 27 23:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 27 23:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 27 23:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug 27 23:14 /etc/kubernetes/scheduler.conf
	
	I0827 23:18:37.471193   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:18:37.480301   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:18:37.488878   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:18:37.497372   59072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:18:37.497442   59072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:18:37.506250   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:18:37.515416   59072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:18:37.515459   59072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:18:37.524167   59072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 23:18:37.533584   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:37.590690   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.477098   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.709843   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.773173   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.848796   59072 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:18:38.848868   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.349674   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.849020   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.865060   59072 api_server.go:72] duration metric: took 1.01626956s to wait for apiserver process to appear ...
	I0827 23:18:39.865075   59072 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:18:39.865093   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:39.029344   58472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:18:39.029339   58472 addons.go:510] duration metric: took 2.444022ms for enable addons: enabled=[]
	I0827 23:18:39.227365   58472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:18:39.242944   58472 node_ready.go:35] waiting up to 6m0s for node "pause-677405" to be "Ready" ...
	I0827 23:18:39.245880   58472 node_ready.go:49] node "pause-677405" has status "Ready":"True"
	I0827 23:18:39.245909   58472 node_ready.go:38] duration metric: took 2.925005ms for node "pause-677405" to be "Ready" ...
	I0827 23:18:39.245921   58472 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:39.251190   58472 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.392602   58472 pod_ready.go:93] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.392650   58472 pod_ready.go:82] duration metric: took 141.427119ms for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.392666   58472 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.792372   58472 pod_ready.go:93] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.792396   58472 pod_ready.go:82] duration metric: took 399.72282ms for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.792411   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.192860   58472 pod_ready.go:93] pod "kube-apiserver-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.192887   58472 pod_ready.go:82] duration metric: took 400.469277ms for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.192901   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.592927   58472 pod_ready.go:93] pod "kube-controller-manager-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.592950   58472 pod_ready.go:82] duration metric: took 400.041717ms for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.592960   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.992171   58472 pod_ready.go:93] pod "kube-proxy-8zvr2" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.992197   58472 pod_ready.go:82] duration metric: took 399.231287ms for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.992207   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:41.392610   58472 pod_ready.go:93] pod "kube-scheduler-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:41.392644   58472 pod_ready.go:82] duration metric: took 400.428696ms for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:41.392666   58472 pod_ready.go:39] duration metric: took 2.146722658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:41.392684   58472 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:18:41.392759   58472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:41.406493   58472 api_server.go:72] duration metric: took 2.379622546s to wait for apiserver process to appear ...
	I0827 23:18:41.406521   58472 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:18:41.406543   58472 api_server.go:253] Checking apiserver healthz at https://192.168.61.236:8443/healthz ...
	I0827 23:18:41.411786   58472 api_server.go:279] https://192.168.61.236:8443/healthz returned 200:
	ok
	I0827 23:18:41.412762   58472 api_server.go:141] control plane version: v1.31.0
	I0827 23:18:41.412781   58472 api_server.go:131] duration metric: took 6.253513ms to wait for apiserver health ...
	I0827 23:18:41.412788   58472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:18:41.594425   58472 system_pods.go:59] 6 kube-system pods found
	I0827 23:18:41.594453   58472 system_pods.go:61] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running
	I0827 23:18:41.594458   58472 system_pods.go:61] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running
	I0827 23:18:41.594461   58472 system_pods.go:61] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running
	I0827 23:18:41.594465   58472 system_pods.go:61] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running
	I0827 23:18:41.594469   58472 system_pods.go:61] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running
	I0827 23:18:41.594472   58472 system_pods.go:61] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running
	I0827 23:18:41.594477   58472 system_pods.go:74] duration metric: took 181.685169ms to wait for pod list to return data ...
	I0827 23:18:41.594483   58472 default_sa.go:34] waiting for default service account to be created ...
	I0827 23:18:41.791715   58472 default_sa.go:45] found service account: "default"
	I0827 23:18:41.791740   58472 default_sa.go:55] duration metric: took 197.251256ms for default service account to be created ...
	I0827 23:18:41.791749   58472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 23:18:41.993950   58472 system_pods.go:86] 6 kube-system pods found
	I0827 23:18:41.993983   58472 system_pods.go:89] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running
	I0827 23:18:41.993991   58472 system_pods.go:89] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running
	I0827 23:18:41.993997   58472 system_pods.go:89] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running
	I0827 23:18:41.994002   58472 system_pods.go:89] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running
	I0827 23:18:41.994012   58472 system_pods.go:89] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running
	I0827 23:18:41.994020   58472 system_pods.go:89] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running
	I0827 23:18:41.994028   58472 system_pods.go:126] duration metric: took 202.27467ms to wait for k8s-apps to be running ...
	I0827 23:18:41.994037   58472 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 23:18:41.994088   58472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:18:42.015521   58472 system_svc.go:56] duration metric: took 21.473265ms WaitForService to wait for kubelet
	I0827 23:18:42.015577   58472 kubeadm.go:582] duration metric: took 2.988711578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:18:42.015603   58472 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:18:42.192573   58472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:18:42.192596   58472 node_conditions.go:123] node cpu capacity is 2
	I0827 23:18:42.192615   58472 node_conditions.go:105] duration metric: took 177.000322ms to run NodePressure ...
	I0827 23:18:42.192626   58472 start.go:241] waiting for startup goroutines ...
	I0827 23:18:42.192636   58472 start.go:246] waiting for cluster config update ...
	I0827 23:18:42.192646   58472 start.go:255] writing updated cluster config ...
	I0827 23:18:42.192937   58472 ssh_runner.go:195] Run: rm -f paused
	I0827 23:18:42.240990   58472 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 23:18:42.243030   58472 out.go:177] * Done! kubectl is now configured to use "pause-677405" cluster and "default" namespace by default
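Once the profile reports Done, the checks the log just performed can be repeated from the host. A minimal sketch, assuming the kubectl context and API endpoint shown above (anonymous access to /healthz is allowed by default):

    # API server health, the same probe api_server.go ran above
    curl -k https://192.168.61.236:8443/healthz
    # node and system-pod status via the freshly written kubeconfig
    kubectl --context pause-677405 get nodes
    kubectl --context pause-677405 -n kube-system get pods
    # kubelet service inside the VM
    minikube ssh -p pause-677405 "sudo systemctl is-active kubelet"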
	
	
	==> CRI-O <==
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.844793562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800722844761449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9256dca9-ad0c-4c7a-bbae-bf44587c9d42 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.845657387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=360ce646-9072-4bce-bf78-e73edffe1bca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.845749266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=360ce646-9072-4bce-bf78-e73edffe1bca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.846107851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=360ce646-9072-4bce-bf78-e73edffe1bca name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.890918760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37b92546-b6f6-4d6b-8313-e07840cf5289 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.891001048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37b92546-b6f6-4d6b-8313-e07840cf5289 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.892429718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fe1e2b3-2e22-4ac0-8ed0-1c36a8911151 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.892863640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800722892835348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fe1e2b3-2e22-4ac0-8ed0-1c36a8911151 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.893502653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b497aacf-c6a6-46e0-8d86-06c692bf2884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.893578351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b497aacf-c6a6-46e0-8d86-06c692bf2884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.893860842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b497aacf-c6a6-46e0-8d86-06c692bf2884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.940071224Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=beeae727-7833-4ab5-9bb1-3beedfae456d name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.940156642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=beeae727-7833-4ab5-9bb1-3beedfae456d name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.941583554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3513aece-dd5a-412e-9596-d04efff77ac1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.942033318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800722942006979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3513aece-dd5a-412e-9596-d04efff77ac1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.942511092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2a1555b-cfb4-4118-aa4d-3e74024b093e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.942583411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2a1555b-cfb4-4118-aa4d-3e74024b093e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.942839480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2a1555b-cfb4-4118-aa4d-3e74024b093e name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.996812785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd4347ec-a6fc-42e2-a382-5c30b030f011 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.996897201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd4347ec-a6fc-42e2-a382-5c30b030f011 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.998593208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebbf19b8-2e5a-4901-a9bd-3eb67e45324d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.999127372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800722999091562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebbf19b8-2e5a-4901-a9bd-3eb67e45324d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.999793025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f45d4a74-7682-4417-b313-32aa20301cff name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:42 pause-677405 crio[2066]: time="2024-08-27 23:18:42.999890096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f45d4a74-7682-4417-b313-32aa20301cff name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:43 pause-677405 crio[2066]: time="2024-08-27 23:18:43.000344445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f45d4a74-7682-4417-b313-32aa20301cff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	276b063d60628       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   20 seconds ago      Running             kube-proxy                2                   60715d8c5bb35       kube-proxy-8zvr2
	5d094c16ab559       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   2                   044b286455d0b       coredns-6f6b679f8f-6fhl7
	5bb8148d18081       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   24 seconds ago      Running             kube-controller-manager   2                   9046aef3b60eb       kube-controller-manager-pause-677405
	a0715e76c2155       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   24 seconds ago      Running             kube-apiserver            2                   fe4bf70c77a37       kube-apiserver-pause-677405
	0d68ff0be7c75       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   24 seconds ago      Running             kube-scheduler            2                   7232e1072e5f2       kube-scheduler-pause-677405
	b224c623ae1c5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   24 seconds ago      Running             etcd                      2                   a2fe03178a41e       etcd-pause-677405
	1c454a7d892f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago      Exited              coredns                   1                   044b286455d0b       coredns-6f6b679f8f-6fhl7
	a5b313ec95b7c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   47 seconds ago      Exited              kube-proxy                1                   60715d8c5bb35       kube-proxy-8zvr2
	f1ab81f15f39f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   47 seconds ago      Exited              kube-scheduler            1                   7232e1072e5f2       kube-scheduler-pause-677405
	cb90515bc2df7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   47 seconds ago      Exited              kube-apiserver            1                   fe4bf70c77a37       kube-apiserver-pause-677405
	8d5de0c2b7576       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   48 seconds ago      Exited              etcd                      1                   a2fe03178a41e       etcd-pause-677405
	bb5381154219f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   48 seconds ago      Exited              kube-controller-manager   1                   9046aef3b60eb       kube-controller-manager-pause-677405
	
	
	==> coredns [1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49136 - 22056 "HINFO IN 3105619649916461629.8965712088103655971. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019234492s
	
	
	==> coredns [5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42669 - 33973 "HINFO IN 6595555225940235633.2868214675891026286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008226435s
	
	
	==> describe nodes <==
	Name:               pause-677405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-677405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=pause-677405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T23_17_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-677405
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.236
	  Hostname:    pause-677405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5eb4d9e5deb045c5a3cc7608567a9add
	  System UUID:                5eb4d9e5-deb0-45c5-a3cc-7608567a9add
	  Boot ID:                    16244681-2964-4854-987c-3affeaff866d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6fhl7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-pause-677405                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         79s
	  kube-system                 kube-apiserver-pause-677405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-677405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-8zvr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-677405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 44s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeReady                79s                kubelet          Node pause-677405 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  79s                kubelet          Node pause-677405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet          Node pause-677405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                kubelet          Node pause-677405 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
	  Normal  RegisteredNode           41s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-677405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-677405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-677405 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.294532] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.060758] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058025] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.165312] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.151809] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.253806] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.759209] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.258294] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.071521] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999636] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.097817] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.825254] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.834844] kauditd_printk_skb: 46 callbacks suppressed
	[ +22.720103] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.072663] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.052504] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
	[  +0.173926] systemd-fstab-generator[2016]: Ignoring "noauto" option for root device
	[  +0.131902] systemd-fstab-generator[2028]: Ignoring "noauto" option for root device
	[  +0.270021] systemd-fstab-generator[2056]: Ignoring "noauto" option for root device
	[  +2.848127] systemd-fstab-generator[2610]: Ignoring "noauto" option for root device
	[  +3.382510] kauditd_printk_skb: 195 callbacks suppressed
	[Aug27 23:18] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +5.167489] kauditd_printk_skb: 53 callbacks suppressed
	[ +15.733904] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	
	
	==> etcd [8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a] <==
	{"level":"info","ts":"2024-08-27T23:17:57.266973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T23:17:57.267003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgPreVoteResp from 282aa318c6d47fc7 at term 2"}
	{"level":"info","ts":"2024-08-27T23:17:57.267027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgVoteResp from 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 282aa318c6d47fc7 elected leader 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.268271Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"282aa318c6d47fc7","local-member-attributes":"{Name:pause-677405 ClientURLs:[https://192.168.61.236:2379]}","request-path":"/0/members/282aa318c6d47fc7/attributes","cluster-id":"9c3196b6b50a570e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:17:57.268370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:17:57.268444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:17:57.268829Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:17:57.268877Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T23:17:57.269777Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:17:57.270729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.236:2379"}
	{"level":"info","ts":"2024-08-27T23:17:57.271893Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:17:57.272981Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:18:16.594033Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-27T23:18:16.594125Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-677405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"]}
	{"level":"warn","ts":"2024-08-27T23:18:16.594254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.594292Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.596048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.236:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.596118Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.236:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T23:18:16.596179Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"282aa318c6d47fc7","current-leader-member-id":"282aa318c6d47fc7"}
	{"level":"info","ts":"2024-08-27T23:18:16.600424Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:16.600644Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:16.600674Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-677405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"]}
	
	
	==> etcd [b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43] <==
	{"level":"info","ts":"2024-08-27T23:18:19.364623Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c3196b6b50a570e","local-member-id":"282aa318c6d47fc7","added-peer-id":"282aa318c6d47fc7","added-peer-peer-urls":["https://192.168.61.236:2380"]}
	{"level":"info","ts":"2024-08-27T23:18:19.364761Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c3196b6b50a570e","local-member-id":"282aa318c6d47fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:18:19.364819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:18:19.366592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:19.383742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T23:18:19.383909Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:19.384045Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:19.389528Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"282aa318c6d47fc7","initial-advertise-peer-urls":["https://192.168.61.236:2380"],"listen-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T23:18:19.392220Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:18:20.322274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgPreVoteResp from 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgVoteResp from 282aa318c6d47fc7 at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became leader at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 282aa318c6d47fc7 elected leader 282aa318c6d47fc7 at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.331558Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"282aa318c6d47fc7","local-member-attributes":"{Name:pause-677405 ClientURLs:[https://192.168.61.236:2379]}","request-path":"/0/members/282aa318c6d47fc7/attributes","cluster-id":"9c3196b6b50a570e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:18:20.331824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:18:20.332369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:18:20.336924Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:20.344064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:18:20.351839Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:20.357697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.236:2379"}
	{"level":"info","ts":"2024-08-27T23:18:20.358267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:18:20.358299Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:18:43 up 1 min,  0 users,  load average: 1.34, 0.53, 0.19
	Linux pause-677405 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87] <==
	I0827 23:18:22.115297       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 23:18:22.115328       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 23:18:22.115433       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 23:18:22.116418       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 23:18:22.125838       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 23:18:22.126337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 23:18:22.132734       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 23:18:22.147263       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 23:18:22.151392       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 23:18:22.151433       1 aggregator.go:171] initial CRD sync complete...
	I0827 23:18:22.151446       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 23:18:22.151452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 23:18:22.151456       1 cache.go:39] Caches are synced for autoregister controller
	I0827 23:18:22.184435       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 23:18:22.184527       1 policy_source.go:224] refreshing policies
	I0827 23:18:22.232987       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 23:18:23.025294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0827 23:18:23.343751       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.236]
	I0827 23:18:23.345889       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 23:18:23.352434       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 23:18:23.785616       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 23:18:23.811448       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 23:18:23.856909       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 23:18:23.899962       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 23:18:23.907769       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39] <==
	I0827 23:18:06.528382       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0827 23:18:06.528399       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0827 23:18:06.528429       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0827 23:18:06.528442       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0827 23:18:06.528452       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0827 23:18:06.528466       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0827 23:18:06.528496       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0827 23:18:06.528564       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0827 23:18:06.528595       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0827 23:18:06.528624       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0827 23:18:06.528931       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0827 23:18:06.528988       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 23:18:06.529130       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 23:18:06.529291       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 23:18:06.529347       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0827 23:18:06.529583       1 controller.go:157] Shutting down quota evaluator
	I0827 23:18:06.529679       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.530879       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0827 23:18:06.531013       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0827 23:18:06.531296       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0827 23:18:06.531396       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 23:18:06.531496       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531615       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531659       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531664       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5] <==
	I0827 23:18:25.454249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0827 23:18:25.454424       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0827 23:18:25.454519       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0827 23:18:25.455432       1 shared_informer.go:320] Caches are synced for service account
	I0827 23:18:25.455479       1 shared_informer.go:320] Caches are synced for deployment
	I0827 23:18:25.454220       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0827 23:18:25.458629       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0827 23:18:25.458753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.879µs"
	I0827 23:18:25.459419       1 shared_informer.go:320] Caches are synced for cronjob
	I0827 23:18:25.463648       1 shared_informer.go:320] Caches are synced for TTL
	I0827 23:18:25.550356       1 shared_informer.go:320] Caches are synced for expand
	I0827 23:18:25.551277       1 shared_informer.go:320] Caches are synced for persistent volume
	I0827 23:18:25.552654       1 shared_informer.go:320] Caches are synced for ephemeral
	I0827 23:18:25.557088       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:25.567233       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 23:18:25.601894       1 shared_informer.go:320] Caches are synced for stateful set
	I0827 23:18:25.601930       1 shared_informer.go:320] Caches are synced for PVC protection
	I0827 23:18:25.629684       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:25.644964       1 shared_informer.go:320] Caches are synced for HPA
	I0827 23:18:25.647400       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0827 23:18:26.082858       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:26.102020       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:26.102173       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 23:18:26.898999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="27.049421ms"
	I0827 23:18:26.899274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="172.994µs"
	
	
	==> kube-controller-manager [bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa] <==
	I0827 23:18:02.123385       1 shared_informer.go:320] Caches are synced for TTL
	I0827 23:18:02.137031       1 shared_informer.go:320] Caches are synced for PVC protection
	I0827 23:18:02.138260       1 shared_informer.go:320] Caches are synced for HPA
	I0827 23:18:02.138312       1 shared_informer.go:320] Caches are synced for daemon sets
	I0827 23:18:02.139611       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0827 23:18:02.139857       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0827 23:18:02.139911       1 shared_informer.go:320] Caches are synced for endpoint
	I0827 23:18:02.139971       1 shared_informer.go:320] Caches are synced for persistent volume
	I0827 23:18:02.140109       1 shared_informer.go:320] Caches are synced for ephemeral
	I0827 23:18:02.140832       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0827 23:18:02.150249       1 shared_informer.go:320] Caches are synced for taint
	I0827 23:18:02.150392       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0827 23:18:02.150499       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-677405"
	I0827 23:18:02.150565       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0827 23:18:02.154452       1 shared_informer.go:320] Caches are synced for job
	I0827 23:18:02.157284       1 shared_informer.go:320] Caches are synced for deployment
	I0827 23:18:02.161274       1 shared_informer.go:320] Caches are synced for disruption
	I0827 23:18:02.189262       1 shared_informer.go:320] Caches are synced for stateful set
	I0827 23:18:02.189379       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0827 23:18:02.252875       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 23:18:02.302505       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:02.317532       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:02.702049       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:02.702092       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 23:18:02.729829       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 23:18:23.002348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 23:18:23.013613       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.236"]
	E0827 23:18:23.013964       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:18:23.077381       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 23:18:23.077423       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 23:18:23.077471       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:18:23.082845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:18:23.083120       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:18:23.083148       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:18:23.095025       1 config.go:197] "Starting service config controller"
	I0827 23:18:23.095065       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:18:23.095085       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:18:23.095096       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:18:23.095592       1 config.go:326] "Starting node config controller"
	I0827 23:18:23.095620       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:18:23.195286       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 23:18:23.195369       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:18:23.197260       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 23:17:57.111292       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 23:17:58.898328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.236"]
	E0827 23:17:58.898439       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:17:59.008509       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 23:17:59.008609       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 23:17:59.008646       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:17:59.012952       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:17:59.013309       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:17:59.013472       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:17:59.017176       1 config.go:197] "Starting service config controller"
	I0827 23:17:59.017291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:17:59.017340       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:17:59.017357       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:17:59.017754       1 config.go:326] "Starting node config controller"
	I0827 23:17:59.017789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:17:59.117972       1 shared_informer.go:320] Caches are synced for node config
	I0827 23:17:59.118089       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:17:59.118102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a] <==
	I0827 23:18:20.741367       1 serving.go:386] Generated self-signed cert in-memory
	I0827 23:18:22.153552       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:18:22.153586       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:18:22.158447       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0827 23:18:22.158590       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0827 23:18:22.158696       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:18:22.158756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:18:22.158790       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0827 23:18:22.158813       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0827 23:18:22.159921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 23:18:22.160042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 23:18:22.258817       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0827 23:18:22.259336       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0827 23:18:22.259477       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7] <==
	I0827 23:17:57.044602       1 serving.go:386] Generated self-signed cert in-memory
	W0827 23:17:58.726776       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 23:17:58.726822       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 23:17:58.726836       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:17:58.726845       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:17:58.882132       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:17:58.882171       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:17:58.889359       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:17:58.889508       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:17:58.890027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 23:17:58.890138       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 23:17:58.990525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:18:06.462033       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0827 23:18:06.462552       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.882335    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: E0827 23:18:18.883305    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.236:8443: connect: connection refused" node="pause-677405"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.914972    3045 scope.go:117] "RemoveContainer" containerID="cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.915345    3045 scope.go:117] "RemoveContainer" containerID="8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.917167    3045 scope.go:117] "RemoveContainer" containerID="bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.919150    3045 scope.go:117] "RemoveContainer" containerID="f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: E0827 23:18:19.085935    3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-677405?timeout=10s\": dial tcp 192.168.61.236:8443: connect: connection refused" interval="800ms"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: I0827 23:18:19.285126    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: E0827 23:18:19.286087    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.236:8443: connect: connection refused" node="pause-677405"
	Aug 27 23:18:20 pause-677405 kubelet[3045]: I0827 23:18:20.088104    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269458    3045 kubelet_node_status.go:111] "Node was previously registered" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269555    3045 kubelet_node_status.go:75] "Successfully registered node" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269582    3045 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.270588    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.459993    3045 apiserver.go:52] "Watching apiserver"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.484029    3045 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.525485    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52158293-b0ab-4cd9-b8a5-457017d195e3-xtables-lock\") pod \"kube-proxy-8zvr2\" (UID: \"52158293-b0ab-4cd9-b8a5-457017d195e3\") " pod="kube-system/kube-proxy-8zvr2"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.525622    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52158293-b0ab-4cd9-b8a5-457017d195e3-lib-modules\") pod \"kube-proxy-8zvr2\" (UID: \"52158293-b0ab-4cd9-b8a5-457017d195e3\") " pod="kube-system/kube-proxy-8zvr2"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.764779    3045 scope.go:117] "RemoveContainer" containerID="1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.765005    3045 scope.go:117] "RemoveContainer" containerID="a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928"
	Aug 27 23:18:26 pause-677405 kubelet[3045]: I0827 23:18:26.853692    3045 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 27 23:18:28 pause-677405 kubelet[3045]: E0827 23:18:28.583375    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800708582872764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:28 pause-677405 kubelet[3045]: E0827 23:18:28.583452    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800708582872764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:38 pause-677405 kubelet[3045]: E0827 23:18:38.585768    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800718585103558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:38 pause-677405 kubelet[3045]: E0827 23:18:38.585815    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800718585103558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677405 -n pause-677405
helpers_test.go:261: (dbg) Run:  kubectl --context pause-677405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-677405 -n pause-677405
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-677405 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-677405 logs -n 25: (1.305726701s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo docker                         | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo cat                            | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo                                | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo find                           | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-409668 sudo crio                           | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-409668                                     | cilium-409668          | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC | 27 Aug 24 23:18 UTC |
	| start   | -p old-k8s-version-686432                            | old-k8s-version-686432 | jenkins | v1.33.1 | 27 Aug 24 23:18 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:18:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:18:24.962537   61383 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:18:24.963009   61383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:18:24.963027   61383 out.go:358] Setting ErrFile to fd 2...
	I0827 23:18:24.963035   61383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:18:24.963480   61383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 23:18:24.964565   61383 out.go:352] Setting JSON to false
	I0827 23:18:24.965548   61383 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7252,"bootTime":1724793453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 23:18:24.965615   61383 start.go:139] virtualization: kvm guest
	I0827 23:18:24.967375   61383 out.go:177] * [old-k8s-version-686432] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 23:18:24.968869   61383 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:18:24.968876   61383 notify.go:220] Checking for updates...
	I0827 23:18:24.971021   61383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:18:24.972176   61383 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:18:24.973419   61383 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:24.974596   61383 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 23:18:24.975887   61383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:18:24.977679   61383 config.go:182] Loaded profile config "cert-expiration-649861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:24.977820   61383 config.go:182] Loaded profile config "kubernetes-upgrade-772694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0827 23:18:24.978022   61383 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:24.978145   61383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:18:25.014657   61383 out.go:177] * Using the kvm2 driver based on user configuration
	I0827 23:18:25.015923   61383 start.go:297] selected driver: kvm2
	I0827 23:18:25.015947   61383 start.go:901] validating driver "kvm2" against <nil>
	I0827 23:18:25.015958   61383 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:18:25.016744   61383 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:18:25.016834   61383 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 23:18:25.033267   61383 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 23:18:25.033324   61383 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:18:25.033529   61383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:18:25.033598   61383 cni.go:84] Creating CNI manager for ""
	I0827 23:18:25.033610   61383 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:18:25.033619   61383 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 23:18:25.033669   61383 start.go:340] cluster config:
	{Name:old-k8s-version-686432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:18:25.033780   61383 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:18:25.035405   61383 out.go:177] * Starting "old-k8s-version-686432" primary control-plane node in "old-k8s-version-686432" cluster
	I0827 23:18:25.036531   61383 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 23:18:25.036572   61383 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0827 23:18:25.036583   61383 cache.go:56] Caching tarball of preloaded images
	I0827 23:18:25.036674   61383 preload.go:172] Found /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0827 23:18:25.036688   61383 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0827 23:18:25.036806   61383 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/old-k8s-version-686432/config.json ...
	I0827 23:18:25.036828   61383 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/old-k8s-version-686432/config.json: {Name:mk0e7629933894ffa318f693efc58312bc76bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:25.036992   61383 start.go:360] acquireMachinesLock for old-k8s-version-686432: {Name:mkb6c8ce63bfdfcb0aa647b066a810c75267cb4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0827 23:18:25.037039   61383 start.go:364] duration metric: took 27.546µs to acquireMachinesLock for "old-k8s-version-686432"
	I0827 23:18:25.037063   61383 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-686432 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686432 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:18:25.037136   61383 start.go:125] createHost starting for "" (driver="kvm2")
	I0827 23:18:23.603434   58472 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 23:18:23.616086   58472 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0827 23:18:23.635496   58472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:18:23.635591   58472 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0827 23:18:23.635617   58472 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0827 23:18:23.649209   58472 system_pods.go:59] 6 kube-system pods found
	I0827 23:18:23.649237   58472 system_pods.go:61] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0827 23:18:23.649244   58472 system_pods.go:61] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0827 23:18:23.649250   58472 system_pods.go:61] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0827 23:18:23.649262   58472 system_pods.go:61] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0827 23:18:23.649269   58472 system_pods.go:61] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0827 23:18:23.649274   58472 system_pods.go:61] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0827 23:18:23.649282   58472 system_pods.go:74] duration metric: took 13.763924ms to wait for pod list to return data ...
	I0827 23:18:23.649288   58472 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:18:23.654810   58472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:18:23.654840   58472 node_conditions.go:123] node cpu capacity is 2
	I0827 23:18:23.654853   58472 node_conditions.go:105] duration metric: took 5.560712ms to run NodePressure ...
	I0827 23:18:23.654872   58472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:23.961941   58472 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0827 23:18:23.968001   58472 kubeadm.go:739] kubelet initialised
	I0827 23:18:23.968026   58472 kubeadm.go:740] duration metric: took 6.060012ms waiting for restarted kubelet to initialise ...
	I0827 23:18:23.968033   58472 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:23.972798   58472 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:25.980592   58472 pod_ready.go:103] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:26.981229   58472 pod_ready.go:93] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:26.981254   58472 pod_ready.go:82] duration metric: took 3.00843116s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:26.981265   58472 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:28.703524   57186 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0827 23:18:28.703638   57186 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0827 23:18:28.705497   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:28.705571   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:28.705694   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:28.705809   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:28.705934   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:28.705999   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:28.707739   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:28.707823   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:28.707896   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:28.707970   57186 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 23:18:28.708018   57186 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 23:18:28.708093   57186 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 23:18:28.708141   57186 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 23:18:28.708228   57186 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 23:18:28.708405   57186 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708498   57186 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 23:18:28.708676   57186 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	I0827 23:18:28.708801   57186 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 23:18:28.708907   57186 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 23:18:28.708973   57186 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 23:18:28.709048   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:28.709116   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:28.709179   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:28.709247   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:28.709319   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:28.709449   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:28.709579   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:28.709632   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:28.709707   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:25.038555   61383 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0827 23:18:25.038685   61383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:25.038720   61383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:25.053716   61383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0827 23:18:25.054219   61383 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:18:25.054815   61383 main.go:141] libmachine: Using API Version  1
	I0827 23:18:25.054857   61383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:18:25.055278   61383 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:18:25.055516   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .GetMachineName
	I0827 23:18:25.055690   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .DriverName
	I0827 23:18:25.055851   61383 start.go:159] libmachine.API.Create for "old-k8s-version-686432" (driver="kvm2")
	I0827 23:18:25.055883   61383 client.go:168] LocalClient.Create starting
	I0827 23:18:25.055921   61383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/ca.pem
	I0827 23:18:25.055963   61383 main.go:141] libmachine: Decoding PEM data...
	I0827 23:18:25.055987   61383 main.go:141] libmachine: Parsing certificate...
	I0827 23:18:25.056063   61383 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19522-7571/.minikube/certs/cert.pem
	I0827 23:18:25.056083   61383 main.go:141] libmachine: Decoding PEM data...
	I0827 23:18:25.056094   61383 main.go:141] libmachine: Parsing certificate...
	I0827 23:18:25.056107   61383 main.go:141] libmachine: Running pre-create checks...
	I0827 23:18:25.056121   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .PreCreateCheck
	I0827 23:18:25.056596   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .GetConfigRaw
	I0827 23:18:25.057053   61383 main.go:141] libmachine: Creating machine...
	I0827 23:18:25.057074   61383 main.go:141] libmachine: (old-k8s-version-686432) Calling .Create
	I0827 23:18:25.057240   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating KVM machine...
	I0827 23:18:25.058600   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | found existing default KVM network
	I0827 23:18:25.060077   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.059942   61405 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0827 23:18:25.060100   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | created network xml: 
	I0827 23:18:25.060113   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | <network>
	I0827 23:18:25.060122   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <name>mk-old-k8s-version-686432</name>
	I0827 23:18:25.060131   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <dns enable='no'/>
	I0827 23:18:25.060141   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   
	I0827 23:18:25.060151   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0827 23:18:25.060165   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |     <dhcp>
	I0827 23:18:25.060206   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0827 23:18:25.060232   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |     </dhcp>
	I0827 23:18:25.060251   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   </ip>
	I0827 23:18:25.060260   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG |   
	I0827 23:18:25.060266   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | </network>
	I0827 23:18:25.060271   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | 
	I0827 23:18:25.065399   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | trying to create private KVM network mk-old-k8s-version-686432 192.168.39.0/24...
	I0827 23:18:25.136738   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | private KVM network mk-old-k8s-version-686432 192.168.39.0/24 created
	I0827 23:18:25.136786   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.136677   61405 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:25.136800   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting up store path in /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 ...
	I0827 23:18:25.136847   61383 main.go:141] libmachine: (old-k8s-version-686432) Building disk image from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 23:18:25.136882   61383 main.go:141] libmachine: (old-k8s-version-686432) Downloading /home/jenkins/minikube-integration/19522-7571/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso...
	I0827 23:18:25.418963   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.418831   61405 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/id_rsa...
	I0827 23:18:25.626647   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.626487   61405 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/old-k8s-version-686432.rawdisk...
	I0827 23:18:25.626685   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Writing magic tar header
	I0827 23:18:25.626706   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Writing SSH key tar header
	I0827 23:18:25.626718   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:25.626654   61405 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 ...
	I0827 23:18:25.626808   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432
	I0827 23:18:25.626834   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube/machines
	I0827 23:18:25.626849   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432 (perms=drwx------)
	I0827 23:18:25.626886   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 23:18:25.626912   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube/machines (perms=drwxr-xr-x)
	I0827 23:18:25.626930   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19522-7571
	I0827 23:18:25.626947   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0827 23:18:25.626957   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home/jenkins
	I0827 23:18:25.626987   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Checking permissions on dir: /home
	I0827 23:18:25.627004   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | Skipping /home - not owner
	I0827 23:18:25.627017   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571/.minikube (perms=drwxr-xr-x)
	I0827 23:18:25.627032   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration/19522-7571 (perms=drwxrwxr-x)
	I0827 23:18:25.627045   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0827 23:18:25.627085   61383 main.go:141] libmachine: (old-k8s-version-686432) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0827 23:18:25.627104   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating domain...
	I0827 23:18:25.628237   61383 main.go:141] libmachine: (old-k8s-version-686432) define libvirt domain using xml: 
	I0827 23:18:25.628262   61383 main.go:141] libmachine: (old-k8s-version-686432) <domain type='kvm'>
	I0827 23:18:25.628284   61383 main.go:141] libmachine: (old-k8s-version-686432)   <name>old-k8s-version-686432</name>
	I0827 23:18:25.628301   61383 main.go:141] libmachine: (old-k8s-version-686432)   <memory unit='MiB'>2200</memory>
	I0827 23:18:25.628310   61383 main.go:141] libmachine: (old-k8s-version-686432)   <vcpu>2</vcpu>
	I0827 23:18:25.628320   61383 main.go:141] libmachine: (old-k8s-version-686432)   <features>
	I0827 23:18:25.628340   61383 main.go:141] libmachine: (old-k8s-version-686432)     <acpi/>
	I0827 23:18:25.628351   61383 main.go:141] libmachine: (old-k8s-version-686432)     <apic/>
	I0827 23:18:25.628369   61383 main.go:141] libmachine: (old-k8s-version-686432)     <pae/>
	I0827 23:18:25.628385   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.628426   61383 main.go:141] libmachine: (old-k8s-version-686432)   </features>
	I0827 23:18:25.628450   61383 main.go:141] libmachine: (old-k8s-version-686432)   <cpu mode='host-passthrough'>
	I0827 23:18:25.628479   61383 main.go:141] libmachine: (old-k8s-version-686432)   
	I0827 23:18:25.628493   61383 main.go:141] libmachine: (old-k8s-version-686432)   </cpu>
	I0827 23:18:25.628506   61383 main.go:141] libmachine: (old-k8s-version-686432)   <os>
	I0827 23:18:25.628516   61383 main.go:141] libmachine: (old-k8s-version-686432)     <type>hvm</type>
	I0827 23:18:25.628527   61383 main.go:141] libmachine: (old-k8s-version-686432)     <boot dev='cdrom'/>
	I0827 23:18:25.628535   61383 main.go:141] libmachine: (old-k8s-version-686432)     <boot dev='hd'/>
	I0827 23:18:25.628545   61383 main.go:141] libmachine: (old-k8s-version-686432)     <bootmenu enable='no'/>
	I0827 23:18:25.628555   61383 main.go:141] libmachine: (old-k8s-version-686432)   </os>
	I0827 23:18:25.628578   61383 main.go:141] libmachine: (old-k8s-version-686432)   <devices>
	I0827 23:18:25.628597   61383 main.go:141] libmachine: (old-k8s-version-686432)     <disk type='file' device='cdrom'>
	I0827 23:18:25.628648   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/boot2docker.iso'/>
	I0827 23:18:25.628675   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target dev='hdc' bus='scsi'/>
	I0827 23:18:25.628687   61383 main.go:141] libmachine: (old-k8s-version-686432)       <readonly/>
	I0827 23:18:25.628699   61383 main.go:141] libmachine: (old-k8s-version-686432)     </disk>
	I0827 23:18:25.628715   61383 main.go:141] libmachine: (old-k8s-version-686432)     <disk type='file' device='disk'>
	I0827 23:18:25.628728   61383 main.go:141] libmachine: (old-k8s-version-686432)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0827 23:18:25.628744   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source file='/home/jenkins/minikube-integration/19522-7571/.minikube/machines/old-k8s-version-686432/old-k8s-version-686432.rawdisk'/>
	I0827 23:18:25.628755   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target dev='hda' bus='virtio'/>
	I0827 23:18:25.628766   61383 main.go:141] libmachine: (old-k8s-version-686432)     </disk>
	I0827 23:18:25.628780   61383 main.go:141] libmachine: (old-k8s-version-686432)     <interface type='network'>
	I0827 23:18:25.628792   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source network='mk-old-k8s-version-686432'/>
	I0827 23:18:25.628804   61383 main.go:141] libmachine: (old-k8s-version-686432)       <model type='virtio'/>
	I0827 23:18:25.628817   61383 main.go:141] libmachine: (old-k8s-version-686432)     </interface>
	I0827 23:18:25.628829   61383 main.go:141] libmachine: (old-k8s-version-686432)     <interface type='network'>
	I0827 23:18:25.628841   61383 main.go:141] libmachine: (old-k8s-version-686432)       <source network='default'/>
	I0827 23:18:25.628857   61383 main.go:141] libmachine: (old-k8s-version-686432)       <model type='virtio'/>
	I0827 23:18:25.628869   61383 main.go:141] libmachine: (old-k8s-version-686432)     </interface>
	I0827 23:18:25.628878   61383 main.go:141] libmachine: (old-k8s-version-686432)     <serial type='pty'>
	I0827 23:18:25.628889   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target port='0'/>
	I0827 23:18:25.628898   61383 main.go:141] libmachine: (old-k8s-version-686432)     </serial>
	I0827 23:18:25.628907   61383 main.go:141] libmachine: (old-k8s-version-686432)     <console type='pty'>
	I0827 23:18:25.628917   61383 main.go:141] libmachine: (old-k8s-version-686432)       <target type='serial' port='0'/>
	I0827 23:18:25.628938   61383 main.go:141] libmachine: (old-k8s-version-686432)     </console>
	I0827 23:18:25.628960   61383 main.go:141] libmachine: (old-k8s-version-686432)     <rng model='virtio'>
	I0827 23:18:25.628986   61383 main.go:141] libmachine: (old-k8s-version-686432)       <backend model='random'>/dev/random</backend>
	I0827 23:18:25.629005   61383 main.go:141] libmachine: (old-k8s-version-686432)     </rng>
	I0827 23:18:25.629016   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.629026   61383 main.go:141] libmachine: (old-k8s-version-686432)     
	I0827 23:18:25.629035   61383 main.go:141] libmachine: (old-k8s-version-686432)   </devices>
	I0827 23:18:25.629046   61383 main.go:141] libmachine: (old-k8s-version-686432) </domain>
	I0827 23:18:25.629058   61383 main.go:141] libmachine: (old-k8s-version-686432) 
	I0827 23:18:25.632695   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:ce:a1:37 in network default
	I0827 23:18:25.633520   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring networks are active...
	I0827 23:18:25.633543   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:25.634386   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring network default is active
	I0827 23:18:25.634881   61383 main.go:141] libmachine: (old-k8s-version-686432) Ensuring network mk-old-k8s-version-686432 is active
	I0827 23:18:25.635623   61383 main.go:141] libmachine: (old-k8s-version-686432) Getting domain xml...
	I0827 23:18:25.636674   61383 main.go:141] libmachine: (old-k8s-version-686432) Creating domain...
	I0827 23:18:26.871714   61383 main.go:141] libmachine: (old-k8s-version-686432) Waiting to get IP...
	I0827 23:18:26.872769   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:26.873346   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:26.873375   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:26.873313   61405 retry.go:31] will retry after 243.262082ms: waiting for machine to come up
	I0827 23:18:27.118726   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.119381   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.119410   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.119334   61405 retry.go:31] will retry after 267.892861ms: waiting for machine to come up
	I0827 23:18:27.388778   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.389491   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.389519   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.389434   61405 retry.go:31] will retry after 383.832728ms: waiting for machine to come up
	I0827 23:18:27.774616   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:27.775092   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:27.775122   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:27.775054   61405 retry.go:31] will retry after 438.087912ms: waiting for machine to come up
	I0827 23:18:28.214564   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:28.215161   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:28.215200   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:28.215076   61405 retry.go:31] will retry after 486.657449ms: waiting for machine to come up
	I0827 23:18:28.703942   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:28.704484   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:28.704517   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:28.704410   61405 retry.go:31] will retry after 871.633482ms: waiting for machine to come up
	I0827 23:18:29.577366   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:29.577864   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:29.577904   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:29.577825   61405 retry.go:31] will retry after 806.704378ms: waiting for machine to come up
	I0827 23:18:28.711322   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:28.711397   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:28.711465   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:28.711523   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:28.711593   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:28.711779   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:18:28.711847   57186 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0827 23:18:28.711926   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712158   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712221   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712370   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712442   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712643   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712710   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.712912   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.712987   57186 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0827 23:18:28.713213   57186 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0827 23:18:28.713256   57186 kubeadm.go:310] 
	I0827 23:18:28.713308   57186 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0827 23:18:28.713364   57186 kubeadm.go:310] 		timed out waiting for the condition
	I0827 23:18:28.713377   57186 kubeadm.go:310] 
	I0827 23:18:28.713433   57186 kubeadm.go:310] 	This error is likely caused by:
	I0827 23:18:28.713494   57186 kubeadm.go:310] 		- The kubelet is not running
	I0827 23:18:28.713606   57186 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0827 23:18:28.713616   57186 kubeadm.go:310] 
	I0827 23:18:28.713724   57186 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0827 23:18:28.713768   57186 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0827 23:18:28.713821   57186 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0827 23:18:28.713832   57186 kubeadm.go:310] 
	I0827 23:18:28.714013   57186 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0827 23:18:28.714104   57186 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0827 23:18:28.714126   57186 kubeadm.go:310] 
	I0827 23:18:28.714244   57186 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0827 23:18:28.714343   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0827 23:18:28.714430   57186 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0827 23:18:28.714511   57186 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0827 23:18:28.714535   57186 kubeadm.go:310] 
	W0827 23:18:28.714652   57186 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-772694 localhost] and IPs [192.168.83.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0827 23:18:28.714696   57186 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0827 23:18:29.898172   57186 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.183448402s)
	I0827 23:18:29.898262   57186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:18:29.912178   57186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:18:29.921912   57186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:18:29.921942   57186 kubeadm.go:157] found existing configuration files:
	
	I0827 23:18:29.921995   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:18:29.931061   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:18:29.931129   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:18:29.940344   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:18:29.949652   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:18:29.949722   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:18:29.960201   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.969231   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:18:29.969282   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:18:29.978533   57186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:18:29.988747   57186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:18:29.988810   57186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:18:29.999648   57186 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0827 23:18:30.063431   57186 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0827 23:18:30.063498   57186 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:18:30.207367   57186 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:18:30.207495   57186 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:18:30.207594   57186 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0827 23:18:30.407332   57186 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:18:30.409199   57186 out.go:235]   - Generating certificates and keys ...
	I0827 23:18:30.409316   57186 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:18:30.409433   57186 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:18:30.409544   57186 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0827 23:18:30.409633   57186 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0827 23:18:30.409829   57186 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0827 23:18:30.409940   57186 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0827 23:18:30.410064   57186 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0827 23:18:30.410165   57186 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0827 23:18:30.410233   57186 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0827 23:18:30.410330   57186 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0827 23:18:30.410404   57186 kubeadm.go:310] [certs] Using the existing "sa" key
	I0827 23:18:30.410512   57186 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:18:30.487640   57186 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:18:30.697348   57186 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:18:30.969478   57186 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:18:31.147504   57186 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:18:31.166119   57186 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:18:31.167645   57186 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:18:31.167719   57186 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:18:31.310340   57186 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:18:31.312219   57186 out.go:235]   - Booting up control plane ...
	I0827 23:18:31.312363   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:18:31.317236   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:18:31.318847   57186 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:18:31.319995   57186 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:18:31.322520   57186 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0827 23:18:28.987395   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:30.988548   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:30.386569   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:30.387080   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:30.387109   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:30.387027   61405 retry.go:31] will retry after 1.235203034s: waiting for machine to come up
	I0827 23:18:31.623266   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:31.623783   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:31.623810   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:31.623742   61405 retry.go:31] will retry after 1.565691941s: waiting for machine to come up
	I0827 23:18:33.190759   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:33.191298   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:33.191322   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:33.191254   61405 retry.go:31] will retry after 1.717801152s: waiting for machine to come up
	I0827 23:18:34.911109   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:34.911682   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:34.911719   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:34.911585   61405 retry.go:31] will retry after 2.83421675s: waiting for machine to come up
	I0827 23:18:33.488446   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:35.987291   58472 pod_ready.go:103] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"False"
	I0827 23:18:36.987883   58472 pod_ready.go:93] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:36.987913   58472 pod_ready.go:82] duration metric: took 10.006638814s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:36.987928   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:38.995520   58472 pod_ready.go:93] pod "kube-apiserver-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:38.995546   58472 pod_ready.go:82] duration metric: took 2.007609842s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:38.995556   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.001525   58472 pod_ready.go:93] pod "kube-controller-manager-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.001553   58472 pod_ready.go:82] duration metric: took 5.989835ms for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.001565   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.007361   58472 pod_ready.go:93] pod "kube-proxy-8zvr2" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.007387   58472 pod_ready.go:82] duration metric: took 5.814948ms for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.007399   58472 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.012508   58472 pod_ready.go:93] pod "kube-scheduler-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.012535   58472 pod_ready.go:82] duration metric: took 5.126848ms for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.012548   58472 pod_ready.go:39] duration metric: took 15.044505134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:39.012567   58472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 23:18:39.025655   58472 ops.go:34] apiserver oom_adj: -16
	I0827 23:18:39.025679   58472 kubeadm.go:597] duration metric: took 42.387767372s to restartPrimaryControlPlane
	I0827 23:18:39.025689   58472 kubeadm.go:394] duration metric: took 42.499074194s to StartCluster
	I0827 23:18:39.025709   58472 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:39.025795   58472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:18:39.026582   58472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:39.026837   58472 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.236 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:18:39.026884   58472 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:18:39.027066   58472 config.go:182] Loaded profile config "pause-677405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:39.028278   58472 out.go:177] * Verifying Kubernetes components...
	I0827 23:18:39.028293   58472 out.go:177] * Enabled addons: 
	I0827 23:18:37.749556   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | domain old-k8s-version-686432 has defined MAC address 52:54:00:13:8b:af in network mk-old-k8s-version-686432
	I0827 23:18:37.750082   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | unable to find current IP address of domain old-k8s-version-686432 in network mk-old-k8s-version-686432
	I0827 23:18:37.750112   61383 main.go:141] libmachine: (old-k8s-version-686432) DBG | I0827 23:18:37.750034   61405 retry.go:31] will retry after 3.605421651s: waiting for machine to come up
	I0827 23:18:37.424633   59072 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c 4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7 2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8 79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815 2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f 1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225 c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb ff6fba7d4cd567c87fbf4331c6d0410ff65f9c87af20b9d26bc62bd65968fbca 4c7a5b5e55b4cadc68409de4b4cd00a9aa774136556ebc3b590d3aea717c2651 2059be0e6d199f8a32bd20d4a25bbbe67add567daf6e562d4f7f65ecdca02ecb c4fa73f84baac8efe9498a6678f09065171ec0ee25d26c909cff376965a4cab6 fdf87ebc8784d2819f3ad07885bdbb04231879c111b7bc63a8f106dd24add9bb: (20.468100508s)
	W0827 23:18:37.424709   59072 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c 4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7 2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8 79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815 2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f 1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225 c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb ff6fba7d4cd567c87fbf4331c6d0410ff65f9c87af20b9d26bc62bd65968fbca 4c7a5b5e55b4cadc68409de4b4cd00a9aa774136556ebc3b590d3aea717c2651 2059be0e6d199f8a32bd20d4a25bbbe67add567daf6e562d4f7f65ecdca02ecb c4fa73f84baac8efe9498a6678f09065171ec0ee25d26c909cff376965a4cab6 fdf87ebc8784d2819f3ad07885bdbb04231879c111b7bc63a8f106dd24add9bb: Process exited with status 1
	stdout:
	855373cb689bb7077955320e0c5063d4207e5f89d05868b5f54f2a329519396c
	4da936193f68df04b144e861a00bb3ae7978889d3d7274d2531c10c3cdb640e7
	2df68b456ed1c8a13aa21ac2c4a469a581d84a66864cd2df7d2e3a18a9c36eef
	a38df2ee2a969fd0e665a8230a36db8eeef73b0e71c4a655429a0f49640dddf8
	79f84561c7e7c541e06433df1c7b55628a159ac86f46d584e972e34eba59f815
	2b9b4ed391804b65af43ada5bb96f76efec662b56810e478438fb5a9c5812d5f
	1b964a10c7ef1da85185a4c6dc6baadf0f7fd6db450a33b1e0762954d873e225
	
	stderr:
	E0827 23:18:37.406258    2982 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": container with ID starting with c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb not found: ID does not exist" containerID="c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb"
	time="2024-08-27T23:18:37Z" level=fatal msg="stopping the container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": rpc error: code = NotFound desc = could not find container \"c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb\": container with ID starting with c636f2e3b63d61468759cd20508befef118f53ab9b0ae2ccb2319d265a7d68cb not found: ID does not exist"
	I0827 23:18:37.424763   59072 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0827 23:18:37.460670   59072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:18:37.471148   59072 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 27 23:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 27 23:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 27 23:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug 27 23:14 /etc/kubernetes/scheduler.conf
	
	I0827 23:18:37.471193   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:18:37.480301   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:18:37.488878   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:18:37.497372   59072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:18:37.497442   59072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:18:37.506250   59072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:18:37.515416   59072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:18:37.515459   59072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:18:37.524167   59072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 23:18:37.533584   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:37.590690   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.477098   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.709843   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.773173   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:38.848796   59072 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:18:38.848868   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.349674   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.849020   59072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:39.865060   59072 api_server.go:72] duration metric: took 1.01626956s to wait for apiserver process to appear ...
	I0827 23:18:39.865075   59072 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:18:39.865093   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:39.029344   58472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:18:39.029339   58472 addons.go:510] duration metric: took 2.444022ms for enable addons: enabled=[]
	I0827 23:18:39.227365   58472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:18:39.242944   58472 node_ready.go:35] waiting up to 6m0s for node "pause-677405" to be "Ready" ...
	I0827 23:18:39.245880   58472 node_ready.go:49] node "pause-677405" has status "Ready":"True"
	I0827 23:18:39.245909   58472 node_ready.go:38] duration metric: took 2.925005ms for node "pause-677405" to be "Ready" ...
	I0827 23:18:39.245921   58472 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:39.251190   58472 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.392602   58472 pod_ready.go:93] pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.392650   58472 pod_ready.go:82] duration metric: took 141.427119ms for pod "coredns-6f6b679f8f-6fhl7" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.392666   58472 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.792372   58472 pod_ready.go:93] pod "etcd-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:39.792396   58472 pod_ready.go:82] duration metric: took 399.72282ms for pod "etcd-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:39.792411   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.192860   58472 pod_ready.go:93] pod "kube-apiserver-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.192887   58472 pod_ready.go:82] duration metric: took 400.469277ms for pod "kube-apiserver-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.192901   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.592927   58472 pod_ready.go:93] pod "kube-controller-manager-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.592950   58472 pod_ready.go:82] duration metric: took 400.041717ms for pod "kube-controller-manager-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.592960   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.992171   58472 pod_ready.go:93] pod "kube-proxy-8zvr2" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:40.992197   58472 pod_ready.go:82] duration metric: took 399.231287ms for pod "kube-proxy-8zvr2" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:40.992207   58472 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:41.392610   58472 pod_ready.go:93] pod "kube-scheduler-pause-677405" in "kube-system" namespace has status "Ready":"True"
	I0827 23:18:41.392644   58472 pod_ready.go:82] duration metric: took 400.428696ms for pod "kube-scheduler-pause-677405" in "kube-system" namespace to be "Ready" ...
	I0827 23:18:41.392666   58472 pod_ready.go:39] duration metric: took 2.146722658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:18:41.392684   58472 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:18:41.392759   58472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:18:41.406493   58472 api_server.go:72] duration metric: took 2.379622546s to wait for apiserver process to appear ...
	I0827 23:18:41.406521   58472 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:18:41.406543   58472 api_server.go:253] Checking apiserver healthz at https://192.168.61.236:8443/healthz ...
	I0827 23:18:41.411786   58472 api_server.go:279] https://192.168.61.236:8443/healthz returned 200:
	ok
	I0827 23:18:41.412762   58472 api_server.go:141] control plane version: v1.31.0
	I0827 23:18:41.412781   58472 api_server.go:131] duration metric: took 6.253513ms to wait for apiserver health ...
	I0827 23:18:41.412788   58472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:18:41.594425   58472 system_pods.go:59] 6 kube-system pods found
	I0827 23:18:41.594453   58472 system_pods.go:61] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running
	I0827 23:18:41.594458   58472 system_pods.go:61] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running
	I0827 23:18:41.594461   58472 system_pods.go:61] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running
	I0827 23:18:41.594465   58472 system_pods.go:61] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running
	I0827 23:18:41.594469   58472 system_pods.go:61] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running
	I0827 23:18:41.594472   58472 system_pods.go:61] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running
	I0827 23:18:41.594477   58472 system_pods.go:74] duration metric: took 181.685169ms to wait for pod list to return data ...
	I0827 23:18:41.594483   58472 default_sa.go:34] waiting for default service account to be created ...
	I0827 23:18:41.791715   58472 default_sa.go:45] found service account: "default"
	I0827 23:18:41.791740   58472 default_sa.go:55] duration metric: took 197.251256ms for default service account to be created ...
	I0827 23:18:41.791749   58472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 23:18:41.993950   58472 system_pods.go:86] 6 kube-system pods found
	I0827 23:18:41.993983   58472 system_pods.go:89] "coredns-6f6b679f8f-6fhl7" [ae14bf6f-1cab-4d5c-99c7-ad71a9e05199] Running
	I0827 23:18:41.993991   58472 system_pods.go:89] "etcd-pause-677405" [52544bb6-25ad-4036-871a-71f31389374a] Running
	I0827 23:18:41.993997   58472 system_pods.go:89] "kube-apiserver-pause-677405" [5a7bdf53-23eb-4d7a-96b1-a613436f2d86] Running
	I0827 23:18:41.994002   58472 system_pods.go:89] "kube-controller-manager-pause-677405" [61a51076-0272-401e-bc93-e4f99369005f] Running
	I0827 23:18:41.994012   58472 system_pods.go:89] "kube-proxy-8zvr2" [52158293-b0ab-4cd9-b8a5-457017d195e3] Running
	I0827 23:18:41.994020   58472 system_pods.go:89] "kube-scheduler-pause-677405" [68ea5325-f6b2-483b-a726-b769b592507f] Running
	I0827 23:18:41.994028   58472 system_pods.go:126] duration metric: took 202.27467ms to wait for k8s-apps to be running ...
	I0827 23:18:41.994037   58472 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 23:18:41.994088   58472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:18:42.015521   58472 system_svc.go:56] duration metric: took 21.473265ms WaitForService to wait for kubelet
	I0827 23:18:42.015577   58472 kubeadm.go:582] duration metric: took 2.988711578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:18:42.015603   58472 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:18:42.192573   58472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:18:42.192596   58472 node_conditions.go:123] node cpu capacity is 2
	I0827 23:18:42.192615   58472 node_conditions.go:105] duration metric: took 177.000322ms to run NodePressure ...
	I0827 23:18:42.192626   58472 start.go:241] waiting for startup goroutines ...
	I0827 23:18:42.192636   58472 start.go:246] waiting for cluster config update ...
	I0827 23:18:42.192646   58472 start.go:255] writing updated cluster config ...
	I0827 23:18:42.192937   58472 ssh_runner.go:195] Run: rm -f paused
	I0827 23:18:42.240990   58472 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 23:18:42.243030   58472 out.go:177] * Done! kubectl is now configured to use "pause-677405" cluster and "default" namespace by default
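
The pod_ready probes recorded above reduce to reading each kube-system pod's PodReady condition until it reports "True". A minimal client-go sketch of that loop follows; it is illustrative only (not minikube's pod_ready.go), assumes a kubeconfig at the default ~/.kube/config location, and uses the etcd-pause-677405 pod from this run plus the 400ms/6m cadence seen in the log as arbitrary example values.

// Sketch only; not minikube's implementation. Poll a kube-system pod until its
// PodReady condition is True, the same signal the pod_ready.go lines above report.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the cluster, as the kubeconfig update in the log implies.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 400ms for up to 6 minutes; both values mirror the log and are not required.
	err = wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, getErr := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-677405", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			return isReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
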
	I0827 23:18:41.894931   59072 api_server.go:279] https://192.168.50.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0827 23:18:41.894947   59072 api_server.go:103] status: https://192.168.50.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0827 23:18:41.894957   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:42.031630   59072 api_server.go:279] https://192.168.50.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0827 23:18:42.031656   59072 api_server.go:103] status: https://192.168.50.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0827 23:18:42.366088   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:42.370809   59072 api_server.go:279] https://192.168.50.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0827 23:18:42.370823   59072 api_server.go:103] status: https://192.168.50.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0827 23:18:42.865198   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:42.872045   59072 api_server.go:279] https://192.168.50.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0827 23:18:42.872074   59072 api_server.go:103] status: https://192.168.50.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0827 23:18:43.365580   59072 api_server.go:253] Checking apiserver healthz at https://192.168.50.243:8443/healthz ...
	I0827 23:18:43.374650   59072 api_server.go:279] https://192.168.50.243:8443/healthz returned 200:
	ok
	I0827 23:18:43.381132   59072 api_server.go:141] control plane version: v1.31.0
	I0827 23:18:43.381146   59072 api_server.go:131] duration metric: took 3.516067306s to wait for apiserver health ...
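
The healthz wait above is an HTTPS GET loop against /healthz that tolerates the 403 (anonymous access not yet granted by the RBAC bootstrap) and 500 (post-start hooks still settling) responses until a 200 "ok" comes back. A stand-alone sketch of such a loop is below; it is not minikube's api_server.go, and it skips certificate verification for brevity where the real code would trust the cluster CA.

// Illustrative healthz probe against the endpoint from the log. Sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip TLS verification instead of loading the
		// minikube CA bundle. Acceptable only for a throwaway probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.243:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expect "ok"
				return
			}
			// 403 and 500 both appear in the log above before the apiserver settles.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
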
	I0827 23:18:43.381153   59072 cni.go:84] Creating CNI manager for ""
	I0827 23:18:43.381158   59072 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 23:18:43.383007   59072 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 23:18:43.384204   59072 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 23:18:43.395942   59072 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
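
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is a standard CNI plugin chain for the bridge backend. The log does not show its contents; a representative conflist in the upstream CNI format (bridge + host-local IPAM + portmap) looks roughly like the following, where the subnet and the exact field set are assumptions for illustration rather than the file minikube actually writes.

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
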
	I0827 23:18:43.413501   59072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:18:43.430246   59072 system_pods.go:59] 7 kube-system pods found
	I0827 23:18:43.430262   59072 system_pods.go:61] "coredns-6f6b679f8f-djd5w" [a657fddb-8fd9-476d-92c2-194571d09900] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0827 23:18:43.430268   59072 system_pods.go:61] "etcd-cert-expiration-649861" [902b14bf-bc30-4843-a3bf-b071802afd4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0827 23:18:43.430277   59072 system_pods.go:61] "kube-apiserver-cert-expiration-649861" [889af966-5276-445d-8266-30b405bf3a89] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0827 23:18:43.430283   59072 system_pods.go:61] "kube-controller-manager-cert-expiration-649861" [17c98c0e-713c-4b39-a4b8-ed99df58d73a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0827 23:18:43.430289   59072 system_pods.go:61] "kube-proxy-5t56t" [9abc6a76-2ce4-4918-8d3d-9d2f4f911a1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0827 23:18:43.430296   59072 system_pods.go:61] "kube-scheduler-cert-expiration-649861" [9fe87c3b-97de-4e4a-8dea-91f5273709e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0827 23:18:43.430305   59072 system_pods.go:61] "storage-provisioner" [666d0251-6a60-4b98-ae0e-cf156a24210f] Running
	I0827 23:18:43.430313   59072 system_pods.go:74] duration metric: took 16.799658ms to wait for pod list to return data ...
	I0827 23:18:43.430319   59072 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:18:43.435420   59072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0827 23:18:43.435462   59072 node_conditions.go:123] node cpu capacity is 2
	I0827 23:18:43.435474   59072 node_conditions.go:105] duration metric: took 5.15032ms to run NodePressure ...
	I0827 23:18:43.435491   59072 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0827 23:18:43.713387   59072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 23:18:43.724580   59072 ops.go:34] apiserver oom_adj: -16
	I0827 23:18:43.724592   59072 kubeadm.go:597] duration metric: took 26.861108398s to restartPrimaryControlPlane
	I0827 23:18:43.724599   59072 kubeadm.go:394] duration metric: took 27.02062714s to StartCluster
	I0827 23:18:43.724614   59072 settings.go:142] acquiring lock: {Name:mk0d4446b23fe2b483973b06899b58d39998de18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:43.724686   59072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 23:18:43.725496   59072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-7571/kubeconfig: {Name:mkd248d07b87157d2742c7db47b55d4d3311f41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:18:43.725711   59072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0827 23:18:43.725777   59072 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:18:43.725844   59072 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-649861"
	I0827 23:18:43.725871   59072 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-649861"
	W0827 23:18:43.725877   59072 addons.go:243] addon storage-provisioner should already be in state true
	I0827 23:18:43.725881   59072 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-649861"
	I0827 23:18:43.725905   59072 host.go:66] Checking if "cert-expiration-649861" exists ...
	I0827 23:18:43.725932   59072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-649861"
	I0827 23:18:43.725941   59072 config.go:182] Loaded profile config "cert-expiration-649861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 23:18:43.726237   59072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:43.726261   59072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:43.726342   59072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:43.726384   59072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:43.727261   59072 out.go:177] * Verifying Kubernetes components...
	I0827 23:18:43.728500   59072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:18:43.743711   59072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0827 23:18:43.744334   59072 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:18:43.744971   59072 main.go:141] libmachine: Using API Version  1
	I0827 23:18:43.744987   59072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:18:43.745416   59072 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:18:43.745579   59072 main.go:141] libmachine: (cert-expiration-649861) Calling .GetState
	I0827 23:18:43.745939   59072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37075
	I0827 23:18:43.746290   59072 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:18:43.746920   59072 main.go:141] libmachine: Using API Version  1
	I0827 23:18:43.746928   59072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:18:43.747242   59072 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:18:43.747751   59072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:43.747768   59072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:43.748772   59072 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-649861"
	W0827 23:18:43.748781   59072 addons.go:243] addon default-storageclass should already be in state true
	I0827 23:18:43.748806   59072 host.go:66] Checking if "cert-expiration-649861" exists ...
	I0827 23:18:43.749136   59072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 23:18:43.749154   59072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 23:18:43.764141   59072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0827 23:18:43.764660   59072 main.go:141] libmachine: () Calling .GetVersion
	I0827 23:18:43.765176   59072 main.go:141] libmachine: Using API Version  1
	I0827 23:18:43.765187   59072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 23:18:43.765490   59072 main.go:141] libmachine: () Calling .GetMachineName
	I0827 23:18:43.765629   59072 main.go:141] libmachine: (cert-expiration-649861) Calling .GetState
	I0827 23:18:43.767381   59072 main.go:141] libmachine: (cert-expiration-649861) Calling .DriverName
	I0827 23:18:43.768873   59072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46277
	I0827 23:18:43.769075   59072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.838435456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800724838402450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7110eaf6-1302-4061-b6b5-c09743542911 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.839135726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3e88d16-24e7-4ff1-9cc7-35154b37c6f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.839274391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3e88d16-24e7-4ff1-9cc7-35154b37c6f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.839563094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3e88d16-24e7-4ff1-9cc7-35154b37c6f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.885260439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ca331d9-3b64-40f2-9afd-ef2a771e89c2 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.885437206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ca331d9-3b64-40f2-9afd-ef2a771e89c2 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.887113017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50d83b66-6108-45bb-a494-12c7565036fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.887540711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800724887515908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50d83b66-6108-45bb-a494-12c7565036fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.888144462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a957c36-9030-467b-8fe7-a9abb81ce560 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.888246801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a957c36-9030-467b-8fe7-a9abb81ce560 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.888504898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a957c36-9030-467b-8fe7-a9abb81ce560 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.932563414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a0e8cfe-088e-4a78-9f9f-7dee82363255 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.932652723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a0e8cfe-088e-4a78-9f9f-7dee82363255 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.933680784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c73584a1-0028-4cd8-85f0-535b0ec52161 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.934322023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800724934295536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c73584a1-0028-4cd8-85f0-535b0ec52161 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.934756152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0a0850e-659c-47d3-8a99-c9a9470f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.934815288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0a0850e-659c-47d3-8a99-c9a9470f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.935882355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0a0850e-659c-47d3-8a99-c9a9470f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.983164698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b3d1c05-35ae-4475-b9c5-e453742115e7 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.983292235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b3d1c05-35ae-4475-b9c5-e453742115e7 name=/runtime.v1.RuntimeService/Version
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.984914867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e42fbd55-31a1-4f17-876f-f02ba533b20d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.985385088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800724985356406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e42fbd55-31a1-4f17-876f-f02ba533b20d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.986001247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234c0f22-b2c4-4e7d-bd8e-2ebafa62f3a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.986062272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234c0f22-b2c4-4e7d-bd8e-2ebafa62f3a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 27 23:18:44 pause-677405 crio[2066]: time="2024-08-27 23:18:44.986361985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724800702788424824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724800702800789878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724800698947744773,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6
a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724800698985628920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19
d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724800698966910882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4d
e7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724800698935641060,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928,PodSandboxId:60715d8c5bb35f861ba89c87da9990c61eecf62ab8b6da24bf52aa5734173a13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724800675201768727,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8zvr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52158293-b0ab-4cd9-b8a5-457017d195e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579,PodSandboxId:044b286455d0bdb53535a3a3638b5ce06bfcfefcf3ed301552ddd85ba9424916,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724800675830941038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6fhl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae14bf6f-1cab-4d5c-99c7-ad71a9e05199,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7,PodSandboxId:7232e1072e5f278dfa5f04d9a0c059b066f5896967b15f2094d00a313aed24f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724800675128462523,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05c3bcd81d6a8d2b8759c8ff722c6e75,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39,PodSandboxId:fe4bf70c77a372074ccb83fef9d3099cbc2f6998ad5ce48d83b1a17cb10ab22f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724800675104089673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-677405,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e0d37d1f1a6743ee4de7974952171d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a,PodSandboxId:a2fe03178a41e61b6d8e4c58f3a4602cf4eb8956b5d3f31c48578f32fd610131,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724800674983739228,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: e49c41e7885642f914a0beb7cee5fe67,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa,PodSandboxId:9046aef3b60ebe207cb66f4bf2c8f7b9d50582f9cc4278651eea1495a1226203,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724800674926909413,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-677405,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 19d1cfa779d74f8f2a4ab411d53354e5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234c0f22-b2c4-4e7d-bd8e-2ebafa62f3a6 name=/runtime.v1.RuntimeService/ListContainers
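Note: the CRI-O journal excerpt above is dominated by routine kubelet polling (Version, ImageFsInfo, ListContainers); the useful signal is that every ListContainers response carries both the attempt-1 containers in CONTAINER_EXITED state and the attempt-2 containers in CONTAINER_RUNNING state, in the same pod sandboxes, i.e. the control plane was restarted in place. As a rough sketch (assuming the CRI-O systemd unit is named "crio", as the "crio[2066]" prefix above suggests), the same journal can be pulled from the node with:

	minikube ssh -p pause-677405 -- sudo journalctl -u crio --no-pager | tail -n 200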
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	276b063d60628       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   22 seconds ago      Running             kube-proxy                2                   60715d8c5bb35       kube-proxy-8zvr2
	5d094c16ab559       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Running             coredns                   2                   044b286455d0b       coredns-6f6b679f8f-6fhl7
	5bb8148d18081       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   26 seconds ago      Running             kube-controller-manager   2                   9046aef3b60eb       kube-controller-manager-pause-677405
	a0715e76c2155       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   26 seconds ago      Running             kube-apiserver            2                   fe4bf70c77a37       kube-apiserver-pause-677405
	0d68ff0be7c75       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   26 seconds ago      Running             kube-scheduler            2                   7232e1072e5f2       kube-scheduler-pause-677405
	b224c623ae1c5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Running             etcd                      2                   a2fe03178a41e       etcd-pause-677405
	1c454a7d892f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago      Exited              coredns                   1                   044b286455d0b       coredns-6f6b679f8f-6fhl7
	a5b313ec95b7c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   49 seconds ago      Exited              kube-proxy                1                   60715d8c5bb35       kube-proxy-8zvr2
	f1ab81f15f39f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   49 seconds ago      Exited              kube-scheduler            1                   7232e1072e5f2       kube-scheduler-pause-677405
	cb90515bc2df7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   49 seconds ago      Exited              kube-apiserver            1                   fe4bf70c77a37       kube-apiserver-pause-677405
	8d5de0c2b7576       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   50 seconds ago      Exited              etcd                      1                   a2fe03178a41e       etcd-pause-677405
	bb5381154219f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   50 seconds ago      Exited              kube-controller-manager   1                   9046aef3b60eb       kube-controller-manager-pause-677405
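	This table condenses the same state as the ListContainers responses above: each kube-system workload has an attempt-2 container Running and an attempt-1 container Exited in the same pod. A minimal sketch for reproducing this view directly on the node (assuming crictl is available inside the minikube VM, as it is in stock minikube images):

	minikube ssh -p pause-677405 -- sudo crictl ps -a
	# inspect a single container, e.g. the exited first kube-apiserver attempt
	minikube ssh -p pause-677405 -- sudo crictl inspect cb90515bc2df7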
	
	
	==> coredns [1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49136 - 22056 "HINFO IN 3105619649916461629.8965712088103655971. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019234492s
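	These are the logs of the exited attempt-1 coredns container: it spent most of its run waiting for the Kubernetes API, began serving with an unsynced cache, and was shut down via SIGTERM around the restart. A hedged sketch for retrieving the previous attempt's logs yourself (assuming kubectl is pointed at this cluster's kubeconfig):

	kubectl -n kube-system logs coredns-6f6b679f8f-6fhl7 --previous
	# or straight from CRI-O, using the container ID from the status table above
	minikube ssh -p pause-677405 -- sudo crictl logs 1c454a7d892f6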
	
	
	==> coredns [5d094c16ab559025d4c1e875191807e2dc1da0acbcac3332457cb1302bff3283] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42669 - 33973 "HINFO IN 6595555225940235633.2868214675891026286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008226435s
	
	
	==> describe nodes <==
	Name:               pause-677405
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-677405
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=pause-677405
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T23_17_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-677405
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:18:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:18:22 +0000   Tue, 27 Aug 2024 23:17:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.236
	  Hostname:    pause-677405
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5eb4d9e5deb045c5a3cc7608567a9add
	  System UUID:                5eb4d9e5-deb0-45c5-a3cc-7608567a9add
	  Boot ID:                    16244681-2964-4854-987c-3affeaff866d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6fhl7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-pause-677405                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-677405             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-677405    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-8zvr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-677405             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeReady                81s                kubelet          Node pause-677405 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-677405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-677405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-677405 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           77s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
	  Normal  RegisteredNode           43s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-677405 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-677405 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-677405 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-677405 event: Registered Node pause-677405 in Controller
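	The node description above shows a healthy single-node control plane: Ready since 23:17:24, no taints, and the restarts visible in the event stream (kubelet "Starting" events 82s and 27s ago, three kube-proxy starts). The same view can typically be regenerated with (assuming kubectl is pointed at this cluster's kubeconfig):

	kubectl describe node pause-677405
	kubectl get node pause-677405 -o wide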
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.294532] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.060758] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058025] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.165312] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.151809] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.253806] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.759209] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.258294] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.071521] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999636] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.097817] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.825254] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.834844] kauditd_printk_skb: 46 callbacks suppressed
	[ +22.720103] systemd-fstab-generator[1989]: Ignoring "noauto" option for root device
	[  +0.072663] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.052504] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
	[  +0.173926] systemd-fstab-generator[2016]: Ignoring "noauto" option for root device
	[  +0.131902] systemd-fstab-generator[2028]: Ignoring "noauto" option for root device
	[  +0.270021] systemd-fstab-generator[2056]: Ignoring "noauto" option for root device
	[  +2.848127] systemd-fstab-generator[2610]: Ignoring "noauto" option for root device
	[  +3.382510] kauditd_printk_skb: 195 callbacks suppressed
	[Aug27 23:18] systemd-fstab-generator[3038]: Ignoring "noauto" option for root device
	[  +5.167489] kauditd_printk_skb: 53 callbacks suppressed
	[ +15.733904] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	
	
	==> etcd [8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a] <==
	{"level":"info","ts":"2024-08-27T23:17:57.266973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-27T23:17:57.267003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgPreVoteResp from 282aa318c6d47fc7 at term 2"}
	{"level":"info","ts":"2024-08-27T23:17:57.267027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgVoteResp from 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became leader at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.267050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 282aa318c6d47fc7 elected leader 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:17:57.268271Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"282aa318c6d47fc7","local-member-attributes":"{Name:pause-677405 ClientURLs:[https://192.168.61.236:2379]}","request-path":"/0/members/282aa318c6d47fc7/attributes","cluster-id":"9c3196b6b50a570e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:17:57.268370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:17:57.268444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:17:57.268829Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:17:57.268877Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T23:17:57.269777Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:17:57.270729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.236:2379"}
	{"level":"info","ts":"2024-08-27T23:17:57.271893Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:17:57.272981Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:18:16.594033Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-27T23:18:16.594125Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-677405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"]}
	{"level":"warn","ts":"2024-08-27T23:18:16.594254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.594292Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.596048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.236:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-27T23:18:16.596118Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.236:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-27T23:18:16.596179Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"282aa318c6d47fc7","current-leader-member-id":"282aa318c6d47fc7"}
	{"level":"info","ts":"2024-08-27T23:18:16.600424Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:16.600644Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:16.600674Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-677405","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"]}
	
	
	==> etcd [b224c623ae1c5f965476f2bfd7826089497f7e88843299e32b441fe6ec421c43] <==
	{"level":"info","ts":"2024-08-27T23:18:19.364623Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c3196b6b50a570e","local-member-id":"282aa318c6d47fc7","added-peer-id":"282aa318c6d47fc7","added-peer-peer-urls":["https://192.168.61.236:2380"]}
	{"level":"info","ts":"2024-08-27T23:18:19.364761Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c3196b6b50a570e","local-member-id":"282aa318c6d47fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:18:19.364819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:18:19.366592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:19.383742Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-27T23:18:19.383909Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:19.384045Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.236:2380"}
	{"level":"info","ts":"2024-08-27T23:18:19.389528Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"282aa318c6d47fc7","initial-advertise-peer-urls":["https://192.168.61.236:2380"],"listen-peer-urls":["https://192.168.61.236:2380"],"advertise-client-urls":["https://192.168.61.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T23:18:19.392220Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:18:20.322274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgPreVoteResp from 282aa318c6d47fc7 at term 3"}
	{"level":"info","ts":"2024-08-27T23:18:20.322465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 received MsgVoteResp from 282aa318c6d47fc7 at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"282aa318c6d47fc7 became leader at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.322562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 282aa318c6d47fc7 elected leader 282aa318c6d47fc7 at term 4"}
	{"level":"info","ts":"2024-08-27T23:18:20.331558Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"282aa318c6d47fc7","local-member-attributes":"{Name:pause-677405 ClientURLs:[https://192.168.61.236:2379]}","request-path":"/0/members/282aa318c6d47fc7/attributes","cluster-id":"9c3196b6b50a570e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:18:20.331824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:18:20.332369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:18:20.336924Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:20.344064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T23:18:20.351839Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:18:20.357697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.236:2379"}
	{"level":"info","ts":"2024-08-27T23:18:20.358267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:18:20.358299Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:18:45 up 1 min,  0 users,  load average: 1.34, 0.53, 0.19
	Linux pause-677405 5.10.207 #1 SMP Mon Aug 26 22:06:37 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a0715e76c215507c44e23531499aea2664ecd04079e20ee05f4519e8c7f91c87] <==
	I0827 23:18:22.115297       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0827 23:18:22.115328       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0827 23:18:22.115433       1 shared_informer.go:320] Caches are synced for configmaps
	I0827 23:18:22.116418       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0827 23:18:22.125838       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0827 23:18:22.126337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0827 23:18:22.132734       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0827 23:18:22.147263       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0827 23:18:22.151392       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0827 23:18:22.151433       1 aggregator.go:171] initial CRD sync complete...
	I0827 23:18:22.151446       1 autoregister_controller.go:144] Starting autoregister controller
	I0827 23:18:22.151452       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0827 23:18:22.151456       1 cache.go:39] Caches are synced for autoregister controller
	I0827 23:18:22.184435       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0827 23:18:22.184527       1 policy_source.go:224] refreshing policies
	I0827 23:18:22.232987       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 23:18:23.025294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0827 23:18:23.343751       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.236]
	I0827 23:18:23.345889       1 controller.go:615] quota admission added evaluator for: endpoints
	I0827 23:18:23.352434       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 23:18:23.785616       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0827 23:18:23.811448       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0827 23:18:23.856909       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0827 23:18:23.899962       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 23:18:23.907769       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
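	By 23:18:23 the restarted API server has finished its cache syncs and is registering quota admission evaluators for the core built-in resources, which matches the node returning to Ready above. A quick, hedged way to confirm the same state from outside the VM (assuming kubectl is pointed at this cluster) is to query the aggregated health endpoints:

	kubectl get --raw='/readyz?verbose'
	kubectl get --raw='/livez?verbose'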
	
	
	==> kube-apiserver [cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39] <==
	I0827 23:18:06.528382       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0827 23:18:06.528399       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0827 23:18:06.528429       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0827 23:18:06.528442       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0827 23:18:06.528452       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I0827 23:18:06.528466       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0827 23:18:06.528496       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0827 23:18:06.528564       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0827 23:18:06.528595       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0827 23:18:06.528624       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0827 23:18:06.528931       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0827 23:18:06.528988       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 23:18:06.529130       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0827 23:18:06.529291       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 23:18:06.529347       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0827 23:18:06.529583       1 controller.go:157] Shutting down quota evaluator
	I0827 23:18:06.529679       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.530879       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0827 23:18:06.531013       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0827 23:18:06.531296       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0827 23:18:06.531396       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0827 23:18:06.531496       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531615       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531659       1 controller.go:176] quota evaluator worker shutdown
	I0827 23:18:06.531664       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [5bb8148d180818571f00f15809d4eb61d91a9dfdd6854dd6db175cf2b8f7d3b5] <==
	I0827 23:18:25.454249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0827 23:18:25.454424       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0827 23:18:25.454519       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0827 23:18:25.455432       1 shared_informer.go:320] Caches are synced for service account
	I0827 23:18:25.455479       1 shared_informer.go:320] Caches are synced for deployment
	I0827 23:18:25.454220       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0827 23:18:25.458629       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0827 23:18:25.458753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.879µs"
	I0827 23:18:25.459419       1 shared_informer.go:320] Caches are synced for cronjob
	I0827 23:18:25.463648       1 shared_informer.go:320] Caches are synced for TTL
	I0827 23:18:25.550356       1 shared_informer.go:320] Caches are synced for expand
	I0827 23:18:25.551277       1 shared_informer.go:320] Caches are synced for persistent volume
	I0827 23:18:25.552654       1 shared_informer.go:320] Caches are synced for ephemeral
	I0827 23:18:25.557088       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:25.567233       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 23:18:25.601894       1 shared_informer.go:320] Caches are synced for stateful set
	I0827 23:18:25.601930       1 shared_informer.go:320] Caches are synced for PVC protection
	I0827 23:18:25.629684       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:25.644964       1 shared_informer.go:320] Caches are synced for HPA
	I0827 23:18:25.647400       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0827 23:18:26.082858       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:26.102020       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:26.102173       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 23:18:26.898999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="27.049421ms"
	I0827 23:18:26.899274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="172.994µs"
	
	
	==> kube-controller-manager [bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa] <==
	I0827 23:18:02.123385       1 shared_informer.go:320] Caches are synced for TTL
	I0827 23:18:02.137031       1 shared_informer.go:320] Caches are synced for PVC protection
	I0827 23:18:02.138260       1 shared_informer.go:320] Caches are synced for HPA
	I0827 23:18:02.138312       1 shared_informer.go:320] Caches are synced for daemon sets
	I0827 23:18:02.139611       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0827 23:18:02.139857       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0827 23:18:02.139911       1 shared_informer.go:320] Caches are synced for endpoint
	I0827 23:18:02.139971       1 shared_informer.go:320] Caches are synced for persistent volume
	I0827 23:18:02.140109       1 shared_informer.go:320] Caches are synced for ephemeral
	I0827 23:18:02.140832       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0827 23:18:02.150249       1 shared_informer.go:320] Caches are synced for taint
	I0827 23:18:02.150392       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0827 23:18:02.150499       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-677405"
	I0827 23:18:02.150565       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0827 23:18:02.154452       1 shared_informer.go:320] Caches are synced for job
	I0827 23:18:02.157284       1 shared_informer.go:320] Caches are synced for deployment
	I0827 23:18:02.161274       1 shared_informer.go:320] Caches are synced for disruption
	I0827 23:18:02.189262       1 shared_informer.go:320] Caches are synced for stateful set
	I0827 23:18:02.189379       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0827 23:18:02.252875       1 shared_informer.go:320] Caches are synced for attach detach
	I0827 23:18:02.302505       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:02.317532       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 23:18:02.702049       1 shared_informer.go:320] Caches are synced for garbage collector
	I0827 23:18:02.702092       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0827 23:18:02.729829       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [276b063d60628ec5ac523ce7d52c6bbb1911c89fde4a128216cdaa89743b5565] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 23:18:23.002348       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 23:18:23.013613       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.236"]
	E0827 23:18:23.013964       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:18:23.077381       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 23:18:23.077423       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 23:18:23.077471       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:18:23.082845       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:18:23.083120       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:18:23.083148       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:18:23.095025       1 config.go:197] "Starting service config controller"
	I0827 23:18:23.095065       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:18:23.095085       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:18:23.095096       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:18:23.095592       1 config.go:326] "Starting node config controller"
	I0827 23:18:23.095620       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:18:23.195286       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 23:18:23.195369       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:18:23.197260       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0827 23:17:57.111292       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0827 23:17:58.898328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.236"]
	E0827 23:17:58.898439       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:17:59.008509       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0827 23:17:59.008609       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0827 23:17:59.008646       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:17:59.012952       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:17:59.013309       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:17:59.013472       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:17:59.017176       1 config.go:197] "Starting service config controller"
	I0827 23:17:59.017291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:17:59.017340       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:17:59.017357       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:17:59.017754       1 config.go:326] "Starting node config controller"
	I0827 23:17:59.017789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:17:59.117972       1 shared_informer.go:320] Caches are synced for node config
	I0827 23:17:59.118089       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:17:59.118102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0d68ff0be7c758d3eb1e04bc186e9195769b65b17f6dd3e7ff6d7d3f1252973a] <==
	I0827 23:18:20.741367       1 serving.go:386] Generated self-signed cert in-memory
	I0827 23:18:22.153552       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:18:22.153586       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:18:22.158447       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0827 23:18:22.158590       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0827 23:18:22.158696       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:18:22.158756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:18:22.158790       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0827 23:18:22.158813       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0827 23:18:22.159921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 23:18:22.160042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 23:18:22.258817       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0827 23:18:22.259336       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0827 23:18:22.259477       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7] <==
	I0827 23:17:57.044602       1 serving.go:386] Generated self-signed cert in-memory
	W0827 23:17:58.726776       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 23:17:58.726822       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 23:17:58.726836       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:17:58.726845       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:17:58.882132       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0827 23:17:58.882171       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:17:58.889359       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0827 23:17:58.889508       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:17:58.890027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0827 23:17:58.890138       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0827 23:17:58.990525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:18:06.462033       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0827 23:18:06.462552       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.882335    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: E0827 23:18:18.883305    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.236:8443: connect: connection refused" node="pause-677405"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.914972    3045 scope.go:117] "RemoveContainer" containerID="cb90515bc2df7dae2941730e227d1f98af58ca2f323dae33554c33471408bd39"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.915345    3045 scope.go:117] "RemoveContainer" containerID="8d5de0c2b7576d00052c747c34f0910786e029cc5765c65d5e2aeaeb02be5a6a"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.917167    3045 scope.go:117] "RemoveContainer" containerID="bb5381154219f30f4049b450a701aaf0a320fae327d7370cd352e5ffa0cd66aa"
	Aug 27 23:18:18 pause-677405 kubelet[3045]: I0827 23:18:18.919150    3045 scope.go:117] "RemoveContainer" containerID="f1ab81f15f39f73e234bf5ae4a9a21d6f5ad04a16af007bbe1e20928bd03f3c7"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: E0827 23:18:19.085935    3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-677405?timeout=10s\": dial tcp 192.168.61.236:8443: connect: connection refused" interval="800ms"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: I0827 23:18:19.285126    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:19 pause-677405 kubelet[3045]: E0827 23:18:19.286087    3045 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.236:8443: connect: connection refused" node="pause-677405"
	Aug 27 23:18:20 pause-677405 kubelet[3045]: I0827 23:18:20.088104    3045 kubelet_node_status.go:72] "Attempting to register node" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269458    3045 kubelet_node_status.go:111] "Node was previously registered" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269555    3045 kubelet_node_status.go:75] "Successfully registered node" node="pause-677405"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.269582    3045 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.270588    3045 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.459993    3045 apiserver.go:52] "Watching apiserver"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.484029    3045 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.525485    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52158293-b0ab-4cd9-b8a5-457017d195e3-xtables-lock\") pod \"kube-proxy-8zvr2\" (UID: \"52158293-b0ab-4cd9-b8a5-457017d195e3\") " pod="kube-system/kube-proxy-8zvr2"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.525622    3045 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52158293-b0ab-4cd9-b8a5-457017d195e3-lib-modules\") pod \"kube-proxy-8zvr2\" (UID: \"52158293-b0ab-4cd9-b8a5-457017d195e3\") " pod="kube-system/kube-proxy-8zvr2"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.764779    3045 scope.go:117] "RemoveContainer" containerID="1c454a7d892f6d08af3b2826c1637fa216ce7a92a135006e13d9c94c051ce579"
	Aug 27 23:18:22 pause-677405 kubelet[3045]: I0827 23:18:22.765005    3045 scope.go:117] "RemoveContainer" containerID="a5b313ec95b7cdf345a6766765102da0d6a628a7ca6394296246536e4db3e928"
	Aug 27 23:18:26 pause-677405 kubelet[3045]: I0827 23:18:26.853692    3045 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 27 23:18:28 pause-677405 kubelet[3045]: E0827 23:18:28.583375    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800708582872764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:28 pause-677405 kubelet[3045]: E0827 23:18:28.583452    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800708582872764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:38 pause-677405 kubelet[3045]: E0827 23:18:38.585768    3045 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800718585103558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 27 23:18:38 pause-677405 kubelet[3045]: E0827 23:18:38.585815    3045 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724800718585103558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677405 -n pause-677405
helpers_test.go:261: (dbg) Run:  kubectl --context pause-677405 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.055s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.15:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.15:8443: connect: connection refused
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (19m8s)
	TestStartStop (19m12s)
	TestStartStop/group/default-k8s-diff-port (18m36s)
	TestStartStop/group/default-k8s-diff-port/serial (18m36s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (4m18s)
	TestStartStop/group/embed-certs (14m22s)
	TestStartStop/group/embed-certs/serial (14m22s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (3m45s)
	TestStartStop/group/no-preload (18m37s)
	TestStartStop/group/no-preload/serial (18m37s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (3m14s)
	TestStartStop/group/old-k8s-version (18m58s)
	TestStartStop/group/old-k8s-version/serial (18m58s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25s)

                                                
                                                
goroutine 3427 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0007dab60, 0xc00096fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008a42d0, {0x4e6cd20, 0x2b, 0x2b}, {0x292a47a?, 0xc000537b00?, 0x4f2a740?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000910fa0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000910fa0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 22 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000592500)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1941 [chan receive, 19 minutes]:
testing.(*T).Run(0xc001288d00, {0x28d048c?, 0x0?}, 0xc000593400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001288d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001288d00, 0xc00199e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3302 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001aa86c0, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2383 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2406
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 37 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 36
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 1687 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0000da1a0, {0x28ceedc?, 0x55127c?}, 0xc001c60f78)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0000da1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0000da1a0, 0x33b89f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 426 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0002310d0, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001284d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39671c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000231100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00122eb00, {0x3925e60, 0xc0008838c0}, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00122eb00, 0x3b9aca00, 0x0, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 407
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 2203 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0008172d0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000807d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39671c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000817300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bea550, {0x3925e60, 0xc000575ad0}, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bea550, 0x3b9aca00, 0x0, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2347
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 2104 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001c4c340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001c4c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001c4c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001c4c340, 0xc0000caf80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 407 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000231100, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 367
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2103 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001c4c1a0, 0xc001c60f78)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1687
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2658 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c390, 0xc0004bc700}, {0x393f680, 0xc00182dc20}, 0x1, 0x0, 0xc0012d5c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c390?, 0xc0004bc070?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c390, 0xc0004bc070}, 0xc0007db6c0, {0xc0018100e0, 0x1c}, {0x28f53bf, 0x14}, {0x290d15e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c390, 0xc0004bc070}, 0xc0007db6c0, {0xc0018100e0, 0x1c}, {0x28f8309?, 0xc00142c760?}, {0x551133?, 0x4a170f?}, {0xc0001b9400, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0007db6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0007db6c0, 0xc0000ca480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2356
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 193 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7fa900c58ea0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000714100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000714100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0007d6bc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0007d6bc0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00024a5a0, {0x393ef90, 0xc0007d6bc0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00024a5a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc001288820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 190
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 800 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc001469b00)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 859
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 1944 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0012891e0, {0x28d048c?, 0x0?}, 0xc00183c080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012891e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0012891e0, 0xc00199e240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1940 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001288b60, 0x33b8c20)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1708
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 406 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 367
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1708 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0000db6c0, {0x28ceedc?, 0x551133?}, 0x33b8c20)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0000db6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0000db6c0, 0x33b8a40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2562 [chan receive, 3 minutes]:
testing.(*T).Run(0xc001288000, {0x28fb176?, 0x60400000004?}, 0xc0000ca700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001288000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001288000, 0xc00183c100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1946
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 625 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001443800, 0xc001487440)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 354
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2190 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007da340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007da340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007da340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007da340, 0xc00183c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2384 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001aa8d00, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2406
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2327 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0007dad00, {0x28fb176?, 0x60400000004?}, 0xc001454200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007dad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0007dad00, 0xc00183c080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1944
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 427 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c550, 0xc000060ba0}, 0xc00090a750, 0xc0012e1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c550, 0xc000060ba0}, 0x80?, 0xc00090a750, 0xc00090a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c550?, 0xc000060ba0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00090a7d0?, 0x592e44?, 0xc000990480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 407
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 3300 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c390, 0xc00017a5b0}, {0x393f680, 0xc0017a1480}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c390?, 0xc000175960?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c390, 0xc000175960}, 0xc0007dbd40, {0xc001481740, 0x16}, {0x28f53bf, 0x14}, {0x290d15e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c390, 0xc000175960}, 0xc0007dbd40, {0xc001481740, 0x16}, {0x28e64d6?, 0xc000503f60?}, {0x551133?, 0x4a170f?}, {0xc0014a7080, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0007dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0007dbd40, 0xc001732000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2304
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2205 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2204
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2346 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 428 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 427
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1946 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0012896c0, {0x28d048c?, 0x0?}, 0xc00183c100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0012896c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0012896c0, 0xc00199e300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2143 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00180e820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00180e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00180e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00180e820, 0xc0000ca000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 599 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001392480, 0xc001470900)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 598
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2750 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c390, 0xc0004e02a0}, {0x393f680, 0xc0015fa340}, 0x1, 0x0, 0xc0012d9c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c390?, 0xc000175730?}, 0x3b9aca00, 0xc000963e10?, 0x1, 0xc000963c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c390, 0xc000175730}, 0xc0007dbba0, {0xc0012da0f0, 0x11}, {0x28f53bf, 0x14}, {0x290d15e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c390, 0xc000175730}, 0xc0007dbba0, {0xc0012da0f0, 0x11}, {0x28da17c?, 0xc00124d760?}, {0x551133?, 0x4a170f?}, {0xc00081dc00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0007dbba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0007dbba0, 0xc001454200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2327
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 799 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc001469b00)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 859
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2388 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c550, 0xc000060ba0}, 0xc001842f50, 0xc001842f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c550, 0xc000060ba0}, 0xa0?, 0xc001842f50, 0xc001842f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c550?, 0xc000060ba0?}, 0x9c7016?, 0xc001664480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc001546000?, 0xc0000608a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 714 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001547e00, 0xc00010ea80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 713
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2142 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00180e680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00180e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00180e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00180e680, 0xc000997b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2356 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001c4c4e0, {0x28fb176?, 0x60400000004?}, 0xc0000ca480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c4c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c4c4e0, 0xc000714200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1943
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2191 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0007db380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007db380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007db380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0007db380, 0xc00183c480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2141 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00180e340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00180e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00180e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00180e340, 0xc000997a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2140 [chan receive, 19 minutes]:
testing.(*testContext).waitParallel(0xc0008c2870)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00180e1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00180e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00180e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00180e1a0, 0xc000997580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2103
	/usr/local/go/src/testing/testing.go:1742 +0x390

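Note: goroutines 2140-2143 and 2191 above are TestNetworkPlugins subtests parked in testing.(*testContext).waitParallel. That is the expected resting state for a subtest that has called t.Parallel() and is waiting for a free parallel slot. The sketch below illustrates the pattern under that assumption; the subtest names and test function are illustrative, not the actual minikube test code.

package integration

import "testing"

// TestNetworkPluginsSketch mirrors the shape seen in the stacks above: a parent test
// spawns named subtests, each of which calls t.Parallel() and then blocks in
// waitParallel until the parent returns and the runner grants it one of the
// -test.parallel slots.
func TestNetworkPluginsSketch(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "bridge"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // the goroutine parks here while waiting for a slot
			t.Logf("network-plugin checks for %q would run here", name)
		})
	}
}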
goroutine 1943 [chan receive, 18 minutes]:
testing.(*T).Run(0xc001289040, {0x28d048c?, 0x0?}, 0xc000714200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001289040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001289040, 0xc00199e200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2304 [chan receive]:
testing.(*T).Run(0xc001c4c000, {0x28fb176?, 0x60400000004?}, 0xc001732000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c4c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c4c000, 0xc000593400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1941
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2204 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c550, 0xc000060ba0}, 0xc001249750, 0xc0012e7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c550, 0xc000060ba0}, 0x0?, 0xc001249750, 0xc001249798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c550?, 0xc000060ba0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa502c5?, 0xc0017bf500?, 0x3942c00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2347
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2347 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000817300, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2766 [IO wait]:
internal/poll.runtime_pollWait(0x7fa900c58018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001454f00?, 0xc0015c7800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001454f00, {0xc0015c7800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001454f00, {0xc0015c7800?, 0xc00149c8c0?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001be24a8, {0xc0015c7800?, 0xc0015c785f?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc00097da28, {0xc0015c7800?, 0x0?, 0xc00097da28?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0018822b0, {0x3926600, 0xc00097da28})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001882008, {0x7fa8f075af58, 0xc0014ad308}, 0xc001287980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001882008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001882008, {0xc0014cd000, 0x1000, 0xc00162ea80?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0018778c0, {0xc0017f50e0, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3924aa0, 0xc0018778c0}, {0xc0017f50e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0017f50e0, 0x9, 0x1287dc0?}, {0x3924aa0?, 0xc0018778c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0017f50a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001287fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00149a900)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2765
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

goroutine 2389 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2388
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2657 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394c390, 0xc0000218f0}, {0x393f680, 0xc00182cc20}, 0x1, 0x0, 0xc001415c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394c390?, 0xc000022230?}, 0x3b9aca00, 0xc000967e10?, 0x1, 0xc000967c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394c390, 0xc000022230}, 0xc001c4cd00, {0xc0012da348, 0x12}, {0x28f53bf, 0x14}, {0x290d15e, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394c390, 0xc000022230}, 0xc001c4cd00, {0xc0012da348, 0x12}, {0x28dc3c6?, 0xc000509f60?}, {0x551133?, 0x4a170f?}, {0xc000912700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c4cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c4cd00, 0xc0000ca700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2562
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2696 [IO wait]:
internal/poll.runtime_pollWait(0x7fa900c58da8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000cb880?, 0xc0008bb800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000cb880, {0xc0008bb800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0000cb880, {0xc0008bb800?, 0x7fa8f074c618?, 0xc0016e25d0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0007e2320, {0xc0008bb800?, 0xc001275938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0016e25d0, {0xc0008bb800?, 0x0?, 0xc0016e25d0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00181ed30, {0x3926600, 0xc0016e25d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00181ea88, {0x39259a0, 0xc0007e2320}, 0xc001275980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00181ea88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00181ea88, {0xc001374000, 0x1000, 0xc00162ea80?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0013542a0, {0xc00161e580, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3924aa0, 0xc0013542a0}, {0xc00161e580, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00161e580, 0x9, 0x1275dc0?}, {0x3924aa0?, 0xc0013542a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00161e540)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001275fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001759500)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2695
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

goroutine 2387 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc001aa8cd0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000807d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39671c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001aa8d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a88210, {0x3925e60, 0xc001e14ba0}, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a88210, 0x3b9aca00, 0x0, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2384
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2639 [IO wait]:
internal/poll.runtime_pollWait(0x7fa900c58300, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000cb800?, 0xc00155d000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000cb800, {0xc00155d000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0000cb800, {0xc00155d000?, 0xc00047e500?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0007e20d0, {0xc00155d000?, 0xc00155d062?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0016e2690, {0xc00155d000?, 0x0?, 0xc0016e2690?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00181e9b0, {0x3926600, 0xc0016e2690})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00181e708, {0x7fa8f075af58, 0xc0006e0c90}, 0xc001281980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00181e708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00181e708, {0xc0016fa000, 0x1000, 0xc0013e41c0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0012fb860, {0xc00161e2e0, 0x9, 0x4e27c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3924aa0, 0xc0012fb860}, {0xc00161e2e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00161e2e0, 0x9, 0x1281dc0?}, {0x3924aa0?, 0xc0012fb860?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00161e2a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001281fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001758180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2638
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.28.0/http2/transport.go:865 +0xcfb

goroutine 3161 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001aa8690, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001285d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x39671c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001aa86c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00012a1d0, {0x3925e60, 0xc0016b20c0}, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00012a1d0, 0x3b9aca00, 0x0, 0x1, 0xc000060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3162 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394c550, 0xc000060ba0}, 0xc001431750, 0xc001431798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394c550, 0xc000060ba0}, 0x40?, 0xc001431750, 0xc001431798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394c550?, 0xc000060ba0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0014a6a80?, 0xc001c18e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3163 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3162
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3301 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3942c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

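Note: the goroutines blocked in PodWait above (for example 2657 and 2750) are inside k8s.io/apimachinery's wait.PollUntilContextTimeout, polling once per second until the expected pod is up or the deadline passes. A rough sketch of that polling pattern follows; it assumes the client-go/apimachinery v0.31.0 APIs named in the module paths above, and the helper name, namespace, and label selector are illustrative rather than the exact minikube test code.

package integration

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod polls once per second until a pod matching selector in ns is
// Running, or until timeout elapses. The PollUntilContextTimeout call corresponds to
// the frames visible in the stacks above (interval 1s, immediate=true).
func waitForRunningPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat list errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}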

Test pass (164/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.56
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 12.45
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 117.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
28 TestCertOptions 79.76
29 TestCertExpiration 279.04
31 TestForceSystemdFlag 101.86
32 TestForceSystemdEnv 45.44
34 TestKVMDriverInstallOrUpdate 4.01
38 TestErrorSpam/setup 43.94
39 TestErrorSpam/start 0.33
40 TestErrorSpam/status 0.74
41 TestErrorSpam/pause 1.49
42 TestErrorSpam/unpause 1.7
43 TestErrorSpam/stop 4.77
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 52.22
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 36.3
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.09
54 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
55 TestFunctional/serial/CacheCmd/cache/add_local 2.08
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.1
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 37.15
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.35
66 TestFunctional/serial/LogsFileCmd 1.39
67 TestFunctional/serial/InvalidService 3.94
69 TestFunctional/parallel/ConfigCmd 0.3
70 TestFunctional/parallel/DashboardCmd 16.63
71 TestFunctional/parallel/DryRun 0.26
72 TestFunctional/parallel/InternationalLanguage 0.13
73 TestFunctional/parallel/StatusCmd 0.82
77 TestFunctional/parallel/ServiceCmdConnect 21.64
78 TestFunctional/parallel/AddonsCmd 0.11
79 TestFunctional/parallel/PersistentVolumeClaim 42.43
81 TestFunctional/parallel/SSHCmd 0.44
82 TestFunctional/parallel/CpCmd 1.34
83 TestFunctional/parallel/MySQL 20.94
84 TestFunctional/parallel/FileSync 0.22
85 TestFunctional/parallel/CertSync 1.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
93 TestFunctional/parallel/License 0.58
94 TestFunctional/parallel/Version/short 0.05
95 TestFunctional/parallel/Version/components 0.74
96 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
97 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
99 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
100 TestFunctional/parallel/ProfileCmd/profile_list 0.33
101 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
111 TestFunctional/parallel/ServiceCmd/DeployApp 22.17
112 TestFunctional/parallel/MountCmd/any-port 8.89
113 TestFunctional/parallel/ServiceCmd/List 0.43
114 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
115 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
116 TestFunctional/parallel/ServiceCmd/Format 0.31
117 TestFunctional/parallel/ServiceCmd/URL 0.39
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.28
123 TestFunctional/parallel/ImageCommands/Setup 1.71
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.08
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
126 TestFunctional/parallel/MountCmd/specific-port 1.52
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.93
128 TestFunctional/parallel/MountCmd/VerifyCleanup 0.76
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.69
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.85
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.82
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 195.88
140 TestMultiControlPlane/serial/DeployApp 6.01
141 TestMultiControlPlane/serial/PingHostFromPods 1.13
142 TestMultiControlPlane/serial/AddWorkerNode 56.38
143 TestMultiControlPlane/serial/NodeLabels 0.06
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
145 TestMultiControlPlane/serial/CopyFile 12.33
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
151 TestMultiControlPlane/serial/DeleteSecondaryNode 16.69
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
154 TestMultiControlPlane/serial/RestartCluster 340.91
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
156 TestMultiControlPlane/serial/AddSecondaryNode 78.02
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
161 TestJSONOutput/start/Command 75.04
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.66
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.58
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 6.59
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.19
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 87.42
193 TestMountStart/serial/StartWithMountFirst 24.49
194 TestMountStart/serial/VerifyMountFirst 0.37
195 TestMountStart/serial/StartWithMountSecond 24.25
196 TestMountStart/serial/VerifyMountSecond 0.37
197 TestMountStart/serial/DeleteFirst 0.69
198 TestMountStart/serial/VerifyMountPostDelete 0.37
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 23.84
201 TestMountStart/serial/VerifyMountPostStop 0.36
204 TestMultiNode/serial/FreshStart2Nodes 109.13
205 TestMultiNode/serial/DeployApp2Nodes 5.92
206 TestMultiNode/serial/PingHostFrom2Pods 0.78
207 TestMultiNode/serial/AddNode 55.25
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.21
210 TestMultiNode/serial/CopyFile 6.9
211 TestMultiNode/serial/StopNode 2.19
212 TestMultiNode/serial/StartAfterStop 37.82
214 TestMultiNode/serial/DeleteNode 2.15
216 TestMultiNode/serial/RestartMultiNode 190.35
217 TestMultiNode/serial/ValidateNameConflict 43.34
224 TestScheduledStopUnix 114.33
228 TestRunningBinaryUpgrade 241.76
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
234 TestNoKubernetes/serial/StartWithK8s 94.32
235 TestNoKubernetes/serial/StartWithStopK8s 43.6
236 TestNoKubernetes/serial/Start 45.79
237 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
238 TestNoKubernetes/serial/ProfileList 1.63
239 TestNoKubernetes/serial/Stop 1.28
240 TestNoKubernetes/serial/StartNoArgs 43.22
241 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
242 TestStoppedBinaryUpgrade/Setup 2.33
243 TestStoppedBinaryUpgrade/Upgrade 126.24
245 TestPause/serial/Start 87.02
247 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
TestDownloadOnly/v1.20.0/json-events (27.56s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611779 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611779 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.558909303s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.56s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611779
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611779: exit status 85 (55.50946ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611779 | jenkins | v1.33.1 | 27 Aug 24 21:37 UTC |          |
	|         | -p download-only-611779        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 21:37:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 21:37:23.354109   14777 out.go:345] Setting OutFile to fd 1 ...
	I0827 21:37:23.354236   14777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:37:23.354247   14777 out.go:358] Setting ErrFile to fd 2...
	I0827 21:37:23.354253   14777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:37:23.354451   14777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	W0827 21:37:23.354589   14777 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19522-7571/.minikube/config/config.json: open /home/jenkins/minikube-integration/19522-7571/.minikube/config/config.json: no such file or directory
	I0827 21:37:23.355153   14777 out.go:352] Setting JSON to true
	I0827 21:37:23.356012   14777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1190,"bootTime":1724793453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 21:37:23.356075   14777 start.go:139] virtualization: kvm guest
	I0827 21:37:23.358475   14777 out.go:97] [download-only-611779] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0827 21:37:23.358570   14777 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball: no such file or directory
	I0827 21:37:23.358612   14777 notify.go:220] Checking for updates...
	I0827 21:37:23.360053   14777 out.go:169] MINIKUBE_LOCATION=19522
	I0827 21:37:23.361380   14777 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 21:37:23.362674   14777 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 21:37:23.363924   14777 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 21:37:23.365150   14777 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0827 21:37:23.367260   14777 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 21:37:23.367549   14777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 21:37:23.466861   14777 out.go:97] Using the kvm2 driver based on user configuration
	I0827 21:37:23.466889   14777 start.go:297] selected driver: kvm2
	I0827 21:37:23.466903   14777 start.go:901] validating driver "kvm2" against <nil>
	I0827 21:37:23.467321   14777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:37:23.467469   14777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 21:37:23.482573   14777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 21:37:23.482630   14777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 21:37:23.483104   14777 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0827 21:37:23.483269   14777 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 21:37:23.483362   14777 cni.go:84] Creating CNI manager for ""
	I0827 21:37:23.483378   14777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 21:37:23.483389   14777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 21:37:23.483458   14777 start.go:340] cluster config:
	{Name:download-only-611779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-611779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 21:37:23.483650   14777 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:37:23.485683   14777 out.go:97] Downloading VM boot image ...
	I0827 21:37:23.485737   14777 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/iso/amd64/minikube-v1.33.1-1724692311-19511-amd64.iso
	I0827 21:37:35.097497   14777 out.go:97] Starting "download-only-611779" primary control-plane node in "download-only-611779" cluster
	I0827 21:37:35.097519   14777 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 21:37:35.192326   14777 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0827 21:37:35.192387   14777 cache.go:56] Caching tarball of preloaded images
	I0827 21:37:35.192593   14777 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0827 21:37:35.194560   14777 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0827 21:37:35.194589   14777 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0827 21:37:35.292100   14777 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-611779 host does not exist
	  To start a cluster, run: "minikube start -p download-only-611779"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-611779
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (12.45s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-954539 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-954539 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.447068322s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.45s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-954539
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-954539: exit status 85 (55.34344ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-611779 | jenkins | v1.33.1 | 27 Aug 24 21:37 UTC |                     |
	|         | -p download-only-611779        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 27 Aug 24 21:37 UTC | 27 Aug 24 21:37 UTC |
	| delete  | -p download-only-611779        | download-only-611779 | jenkins | v1.33.1 | 27 Aug 24 21:37 UTC | 27 Aug 24 21:37 UTC |
	| start   | -o=json --download-only        | download-only-954539 | jenkins | v1.33.1 | 27 Aug 24 21:37 UTC |                     |
	|         | -p download-only-954539        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 21:37:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 21:37:51.218157   15052 out.go:345] Setting OutFile to fd 1 ...
	I0827 21:37:51.218404   15052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:37:51.218412   15052 out.go:358] Setting ErrFile to fd 2...
	I0827 21:37:51.218417   15052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 21:37:51.218586   15052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 21:37:51.219091   15052 out.go:352] Setting JSON to true
	I0827 21:37:51.219911   15052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1218,"bootTime":1724793453,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 21:37:51.219964   15052 start.go:139] virtualization: kvm guest
	I0827 21:37:51.221963   15052 out.go:97] [download-only-954539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 21:37:51.222104   15052 notify.go:220] Checking for updates...
	I0827 21:37:51.223430   15052 out.go:169] MINIKUBE_LOCATION=19522
	I0827 21:37:51.224901   15052 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 21:37:51.226255   15052 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 21:37:51.227702   15052 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 21:37:51.229027   15052 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0827 21:37:51.231783   15052 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 21:37:51.232043   15052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 21:37:51.263603   15052 out.go:97] Using the kvm2 driver based on user configuration
	I0827 21:37:51.263644   15052 start.go:297] selected driver: kvm2
	I0827 21:37:51.263657   15052 start.go:901] validating driver "kvm2" against <nil>
	I0827 21:37:51.263973   15052 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:37:51.264061   15052 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19522-7571/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0827 21:37:51.278868   15052 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0827 21:37:51.278918   15052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 21:37:51.279378   15052 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0827 21:37:51.279519   15052 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 21:37:51.279581   15052 cni.go:84] Creating CNI manager for ""
	I0827 21:37:51.279593   15052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0827 21:37:51.279600   15052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 21:37:51.279651   15052 start.go:340] cluster config:
	{Name:download-only-954539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-954539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 21:37:51.279764   15052 iso.go:125] acquiring lock: {Name:mk7d8bf57991642fd581f9e8cbc67737b455b805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 21:37:51.281572   15052 out.go:97] Starting "download-only-954539" primary control-plane node in "download-only-954539" cluster
	I0827 21:37:51.281584   15052 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 21:37:51.775128   15052 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0827 21:37:51.775199   15052 cache.go:56] Caching tarball of preloaded images
	I0827 21:37:51.775374   15052 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0827 21:37:51.777225   15052 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0827 21:37:51.777237   15052 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0827 21:37:51.872454   15052 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19522-7571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-954539 host does not exist
	  To start a cluster, run: "minikube start -p download-only-954539"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-954539
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-390352 --alsologtostderr --binary-mirror http://127.0.0.1:36835 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-390352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-390352
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (117.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-873413 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-873413 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m56.399590662s)
helpers_test.go:175: Cleaning up "offline-crio-873413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-873413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-873413: (1.021927995s)
--- PASS: TestOffline (117.42s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-709833
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-709833: exit status 85 (45.572607ms)

                                                
                                                
-- stdout --
	* Profile "addons-709833" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-709833"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-709833
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-709833: exit status 85 (44.682674ms)

                                                
                                                
-- stdout --
	* Profile "addons-709833" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-709833"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestCertOptions (79.76s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-950327 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-950327 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m18.290762731s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-950327 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-950327 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-950327 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-950327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-950327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-950327: (1.016160424s)
--- PASS: TestCertOptions (79.76s)

                                                
                                    
TestCertExpiration (279.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-649861 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-649861 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (59.553348852s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-649861 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-649861 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.413045408s)
helpers_test.go:175: Cleaning up "cert-expiration-649861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-649861
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-649861: (1.077496358s)
--- PASS: TestCertExpiration (279.04s)

                                                
                                    
TestForceSystemdFlag (101.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-877373 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-877373 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.686996045s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-877373 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-877373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-877373
--- PASS: TestForceSystemdFlag (101.86s)

                                                
                                    
TestForceSystemdEnv (45.44s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-911409 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-911409 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.114651203s)
helpers_test.go:175: Cleaning up "force-systemd-env-911409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-911409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-911409: (1.320955901s)
--- PASS: TestForceSystemdEnv (45.44s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.01s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.01s)

                                                
                                    
TestErrorSpam/setup (43.94s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-504368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-504368 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-504368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-504368 --driver=kvm2  --container-runtime=crio: (43.939460955s)
--- PASS: TestErrorSpam/setup (43.94s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (4.77s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop: (1.597505761s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop: (1.461149421s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-504368 --log_dir /tmp/nospam-504368 stop: (1.709467708s)
--- PASS: TestErrorSpam/stop (4.77s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19522-7571/.minikube/files/etc/test/nested/copy/14765/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (52.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-299635 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.220194541s)
--- PASS: TestFunctional/serial/StartWithProxy (52.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-299635 --alsologtostderr -v=8: (36.295859848s)
functional_test.go:663: soft start took 36.296673649s for "functional-299635" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.30s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-299635 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:3.1: (1.442351384s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:3.3: (1.409084144s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 cache add registry.k8s.io/pause:latest: (1.27346479s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-299635 /tmp/TestFunctionalserialCacheCmdcacheadd_local139707706/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache add minikube-local-cache-test:functional-299635
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 cache add minikube-local-cache-test:functional-299635: (1.762234706s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache delete minikube-local-cache-test:functional-299635
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-299635
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.790613ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 cache reload: (1.057736447s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 kubectl -- --context functional-299635 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-299635 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-299635 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.153722051s)
functional_test.go:761: restart took 37.153847272s for "functional-299635" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.15s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-299635 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 logs: (1.345630692s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 logs --file /tmp/TestFunctionalserialLogsFileCmd3307208673/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 logs --file /tmp/TestFunctionalserialLogsFileCmd3307208673/001/logs.txt: (1.384305155s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-299635 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-299635
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-299635: exit status 115 (265.285966ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.110:30871 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-299635 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 config get cpus: exit status 14 (50.865648ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 config get cpus: exit status 14 (50.885242ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-299635 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-299635 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28059: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.63s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-299635 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.34358ms)

                                                
                                                
-- stdout --
	* [functional-299635] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:21:44.744041   27803 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:21:44.744143   27803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:21:44.744152   27803 out.go:358] Setting ErrFile to fd 2...
	I0827 22:21:44.744156   27803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:21:44.744353   27803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:21:44.744885   27803 out.go:352] Setting JSON to false
	I0827 22:21:44.745741   27803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3852,"bootTime":1724793453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:21:44.745792   27803 start.go:139] virtualization: kvm guest
	I0827 22:21:44.748108   27803 out.go:177] * [functional-299635] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0827 22:21:44.749470   27803 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:21:44.749481   27803 notify.go:220] Checking for updates...
	I0827 22:21:44.751993   27803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:21:44.753327   27803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:21:44.754365   27803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:21:44.755804   27803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:21:44.757102   27803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:21:44.759005   27803 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:21:44.759597   27803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:21:44.759683   27803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:21:44.774715   27803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0827 22:21:44.775105   27803 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:21:44.775639   27803 main.go:141] libmachine: Using API Version  1
	I0827 22:21:44.775658   27803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:21:44.776012   27803 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:21:44.776187   27803 main.go:141] libmachine: (functional-299635) Calling .DriverName
	I0827 22:21:44.776484   27803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:21:44.776787   27803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:21:44.776824   27803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:21:44.791763   27803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0827 22:21:44.792140   27803 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:21:44.792783   27803 main.go:141] libmachine: Using API Version  1
	I0827 22:21:44.792808   27803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:21:44.793157   27803 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:21:44.793403   27803 main.go:141] libmachine: (functional-299635) Calling .DriverName
	I0827 22:21:44.826021   27803 out.go:177] * Using the kvm2 driver based on existing profile
	I0827 22:21:44.827373   27803 start.go:297] selected driver: kvm2
	I0827 22:21:44.827388   27803 start.go:901] validating driver "kvm2" against &{Name:functional-299635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-299635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:21:44.827513   27803 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:21:44.829797   27803 out.go:201] 
	W0827 22:21:44.831162   27803 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0827 22:21:44.832437   27803 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299635 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-299635 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.585012ms)

                                                
                                                
-- stdout --
	* [functional-299635] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:21:42.017939   27482 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:21:42.018049   27482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:21:42.018059   27482 out.go:358] Setting ErrFile to fd 2...
	I0827 22:21:42.018064   27482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:21:42.018369   27482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:21:42.018888   27482 out.go:352] Setting JSON to false
	I0827 22:21:42.019747   27482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3849,"bootTime":1724793453,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0827 22:21:42.019812   27482 start.go:139] virtualization: kvm guest
	I0827 22:21:42.022195   27482 out.go:177] * [functional-299635] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0827 22:21:42.024008   27482 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:21:42.024008   27482 notify.go:220] Checking for updates...
	I0827 22:21:42.027235   27482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:21:42.028766   27482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	I0827 22:21:42.030148   27482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	I0827 22:21:42.031544   27482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0827 22:21:42.032866   27482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:21:42.034444   27482 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:21:42.034817   27482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:21:42.034866   27482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:21:42.049656   27482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35423
	I0827 22:21:42.050169   27482 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:21:42.050756   27482 main.go:141] libmachine: Using API Version  1
	I0827 22:21:42.050779   27482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:21:42.051160   27482 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:21:42.051356   27482 main.go:141] libmachine: (functional-299635) Calling .DriverName
	I0827 22:21:42.051762   27482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:21:42.052223   27482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:21:42.052272   27482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:21:42.066476   27482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0827 22:21:42.066809   27482 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:21:42.067274   27482 main.go:141] libmachine: Using API Version  1
	I0827 22:21:42.067300   27482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:21:42.067573   27482 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:21:42.067732   27482 main.go:141] libmachine: (functional-299635) Calling .DriverName
	I0827 22:21:42.098586   27482 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0827 22:21:42.100002   27482 start.go:297] selected driver: kvm2
	I0827 22:21:42.100019   27482 start.go:901] validating driver "kvm2" against &{Name:functional-299635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19511/minikube-v1.33.1-1724692311-19511-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-299635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:21:42.100148   27482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:21:42.102277   27482 out.go:201] 
	W0827 22:21:42.103640   27482 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0827 22:21:42.104878   27482 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (21.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-299635 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-299635 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-scbqn" [6eab4d60-2826-4086-8df8-bed98b86abd7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-scbqn" [6eab4d60-2826-4086-8df8-bed98b86abd7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.140639859s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.110:31746
functional_test.go:1675: http://192.168.39.110:31746: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-scbqn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.110:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.110:31746
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.64s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a087517b-547e-4569-a3f5-5e4e39001878] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003843293s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-299635 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-299635 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-299635 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-299635 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-299635 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e376b7e9-9fb2-4dd3-8861-954c68779cf0] Pending
helpers_test.go:344: "sp-pod" [e376b7e9-9fb2-4dd3-8861-954c68779cf0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e376b7e9-9fb2-4dd3-8861-954c68779cf0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.003284552s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-299635 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-299635 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-299635 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb64b3da-1db0-4135-85da-919386083eca] Pending
helpers_test.go:344: "sp-pod" [bb64b3da-1db0-4135-85da-919386083eca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb64b3da-1db0-4135-85da-919386083eca] Running
2024/08/27 22:22:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004960729s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-299635 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.43s)
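
Note: the testdata manifests applied above are not reproduced in this log. The sketch below uses the claim name, pod name, label, container name and mount path that do appear in the output; the storage size and container image are assumptions:

kubectl --context functional-299635 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi              # size is an assumption
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:latest   # image is an assumption
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

The test then writes /tmp/mount/foo, deletes and recreates the pod, and checks the file is still listed, which is what shows the claim's data outliving the pod.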

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh -n functional-299635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cp functional-299635:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4193394015/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh -n functional-299635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh -n functional-299635 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)
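
Note: the same round trip can be done by hand with the cp and ssh subcommands exercised above (the destination path on the host is illustrative):

# host -> node, then read the file back over ssh
out/minikube-linux-amd64 -p functional-299635 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-299635 ssh -n functional-299635 "sudo cat /home/docker/cp-test.txt"
# node -> host
out/minikube-linux-amd64 -p functional-299635 cp functional-299635:/home/docker/cp-test.txt /tmp/cp-test.txt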

                                                
                                    
TestFunctional/parallel/MySQL (20.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-299635 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-9l7p8" [27226054-3ec7-4115-be4e-daa1a3baa1da] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-9l7p8" [27226054-3ec7-4115-be4e-daa1a3baa1da] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.123936625s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- mysql -ppassword -e "show databases;": exit status 1 (243.371806ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- mysql -ppassword -e "show databases;": exit status 1 (535.540372ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.94s)
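
Note: the two ERROR 2002 attempts above are expected noise; the pod reports Running before mysqld is actually accepting connections on its socket, so the test simply reruns the query until it succeeds. Outside the harness the same thing can be done with a small retry loop (a sketch; the pod name is specific to this run):

for i in $(seq 1 10); do
  kubectl --context functional-299635 exec mysql-6cdb49bbb-9l7p8 -- \
    mysql -ppassword -e "show databases;" && break
  sleep 2   # give mysqld time to finish initializing
done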

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14765/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/test/nested/copy/14765/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
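
Note: the file checked here comes from minikube's file sync: content placed under the files/ directory of the minikube home is copied to the same path inside the VM when the cluster starts. A sketch, assuming the default ~/.minikube location:

mkdir -p ~/.minikube/files/etc/test/nested/copy/14765
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/14765/hosts
out/minikube-linux-amd64 start -p functional-299635    # files/ is synced into the VM at start
out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/test/nested/copy/14765/hosts"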

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14765.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/ssl/certs/14765.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14765.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /usr/share/ca-certificates/14765.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/147652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/ssl/certs/147652.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/147652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /usr/share/ca-certificates/147652.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
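
Note: the three paths probed above should all hold the same synced certificate (51391683.0 being, presumably, the hash-named alias for 14765.pem). A quick way to confirm that by hand is to compare checksums inside the VM, assuming sha256sum is available in the guest:

out/minikube-linux-amd64 -p functional-299635 ssh \
  "sudo sha256sum /etc/ssl/certs/14765.pem /usr/share/ca-certificates/14765.pem /etc/ssl/certs/51391683.0"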

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-299635 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active docker": exit status 1 (238.069577ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active containerd": exit status 1 (265.889591ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
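
Note: exit status 3 here is systemctl's usual code for a unit that is not active, so the non-zero exits above are the expected outcome, not failures. With --container-runtime=crio only cri-o should be active (a sketch, assuming the unit is named crio in the guest):

out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active crio"        # expected: active
out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active docker"      # expected: inactive, exit 3
out/minikube-linux-amd64 -p functional-299635 ssh "sudo systemctl is-active containerd"  # expected: inactive, exit 3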

                                                
                                    
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "266.390436ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.801107ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "249.516196ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.930943ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
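
Note: the gap between the two timings above largely comes from --light skipping the per-profile status validation. For scripting, the JSON output can be filtered with jq; a sketch, assuming jq is installed and the usual valid/invalid layout (field names may differ by minikube version):

out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
out/minikube-linux-amd64 profile list -o json --light    # faster: skips status validation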

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-299635 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-299635 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bfdxd" [d3623140-f941-40e1-9ca1-7230f07f0d99] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bfdxd" [d3623140-f941-40e1-9ca1-7230f07f0d99] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.004493637s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdany-port2154858350/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724797302109426186" to /tmp/TestFunctionalparallelMountCmdany-port2154858350/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724797302109426186" to /tmp/TestFunctionalparallelMountCmdany-port2154858350/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724797302109426186" to /tmp/TestFunctionalparallelMountCmdany-port2154858350/001/test-1724797302109426186
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.603044ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 27 22:21 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 27 22:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 27 22:21 test-1724797302109426186
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh cat /mount-9p/test-1724797302109426186
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-299635 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6f2e5f4b-0280-4c29-b107-a3bd988a28dd] Pending
helpers_test.go:344: "busybox-mount" [6f2e5f4b-0280-4c29-b107-a3bd988a28dd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6f2e5f4b-0280-4c29-b107-a3bd988a28dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6f2e5f4b-0280-4c29-b107-a3bd988a28dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003360077s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-299635 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdany-port2154858350/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.89s)
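
Note: the first findmnt probe failing with exit status 1 most likely just means the 9p mount was not up yet; the retry immediately afterwards succeeds. Reproducing the flow by hand looks roughly like this (the host directory is illustrative; the mount process runs until killed):

mkdir -p /tmp/hostdir
out/minikube-linux-amd64 mount -p functional-299635 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"   # retry if the mount is not up yet
out/minikube-linux-amd64 -p functional-299635 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-299635 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"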

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service list -o json
functional_test.go:1494: Took "435.261692ms" to run "out/minikube-linux-amd64 -p functional-299635 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.110:31611
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.110:31611
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
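
Note: the URL printed here is the NodePort endpoint on the VM's IP (the port changes between runs), so it can be consumed directly in scripts:

URL=$(out/minikube-linux-amd64 -p functional-299635 service hello-node --url)
curl -s "$URL"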

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299635 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-299635
localhost/kicbase/echo-server:functional-299635
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299635 image ls --format short --alsologtostderr:
I0827 22:21:58.708976   29062 out.go:345] Setting OutFile to fd 1 ...
I0827 22:21:58.709114   29062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.709124   29062 out.go:358] Setting ErrFile to fd 2...
I0827 22:21:58.709130   29062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.709343   29062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
I0827 22:21:58.710114   29062 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.710219   29062 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.710592   29062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.710631   29062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.725749   29062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
I0827 22:21:58.726322   29062 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.727021   29062 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.727049   29062 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.727338   29062 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.727541   29062 main.go:141] libmachine: (functional-299635) Calling .GetState
I0827 22:21:58.729746   29062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.729792   29062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.744945   29062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
I0827 22:21:58.745335   29062 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.745836   29062 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.745857   29062 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.746220   29062 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.746367   29062 main.go:141] libmachine: (functional-299635) Calling .DriverName
I0827 22:21:58.746522   29062 ssh_runner.go:195] Run: systemctl --version
I0827 22:21:58.746537   29062 main.go:141] libmachine: (functional-299635) Calling .GetSSHHostname
I0827 22:21:58.749774   29062 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.750114   29062 main.go:141] libmachine: (functional-299635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:fe:c2", ip: ""} in network mk-functional-299635: {Iface:virbr1 ExpiryTime:2024-08-27 23:19:13 +0000 UTC Type:0 Mac:52:54:00:4e:fe:c2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-299635 Clientid:01:52:54:00:4e:fe:c2}
I0827 22:21:58.750146   29062 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.750302   29062 main.go:141] libmachine: (functional-299635) Calling .GetSSHPort
I0827 22:21:58.750435   29062 main.go:141] libmachine: (functional-299635) Calling .GetSSHKeyPath
I0827 22:21:58.750562   29062 main.go:141] libmachine: (functional-299635) Calling .GetSSHUsername
I0827 22:21:58.750689   29062 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/functional-299635/id_rsa Username:docker}
I0827 22:21:58.842667   29062 ssh_runner.go:195] Run: sudo crictl images --output json
I0827 22:21:58.884003   29062 main.go:141] libmachine: Making call to close driver server
I0827 22:21:58.884015   29062 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:58.884350   29062 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:58.884401   29062 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:21:58.884402   29062 main.go:141] libmachine: Making call to close connection to plugin binary
I0827 22:21:58.884442   29062 main.go:141] libmachine: Making call to close driver server
I0827 22:21:58.884451   29062 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:58.884693   29062 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:58.884696   29062 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:21:58.884710   29062 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299635 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| localhost/kicbase/echo-server           | functional-299635  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-299635  | 59ab775e57c50 | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299635 image ls --format table --alsologtostderr:
I0827 22:21:59.155347   29172 out.go:345] Setting OutFile to fd 1 ...
I0827 22:21:59.155570   29172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:59.155580   29172 out.go:358] Setting ErrFile to fd 2...
I0827 22:21:59.155584   29172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:59.155752   29172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
I0827 22:21:59.156293   29172 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:59.156393   29172 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:59.156788   29172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:59.156836   29172 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:59.170852   29172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
I0827 22:21:59.171288   29172 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:59.171812   29172 main.go:141] libmachine: Using API Version  1
I0827 22:21:59.171829   29172 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:59.172183   29172 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:59.172343   29172 main.go:141] libmachine: (functional-299635) Calling .GetState
I0827 22:21:59.174125   29172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:59.174165   29172 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:59.187786   29172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
I0827 22:21:59.188159   29172 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:59.188699   29172 main.go:141] libmachine: Using API Version  1
I0827 22:21:59.188735   29172 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:59.189101   29172 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:59.189288   29172 main.go:141] libmachine: (functional-299635) Calling .DriverName
I0827 22:21:59.189498   29172 ssh_runner.go:195] Run: systemctl --version
I0827 22:21:59.189523   29172 main.go:141] libmachine: (functional-299635) Calling .GetSSHHostname
I0827 22:21:59.192201   29172 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:59.192599   29172 main.go:141] libmachine: (functional-299635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:fe:c2", ip: ""} in network mk-functional-299635: {Iface:virbr1 ExpiryTime:2024-08-27 23:19:13 +0000 UTC Type:0 Mac:52:54:00:4e:fe:c2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-299635 Clientid:01:52:54:00:4e:fe:c2}
I0827 22:21:59.192633   29172 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:59.192703   29172 main.go:141] libmachine: (functional-299635) Calling .GetSSHPort
I0827 22:21:59.192852   29172 main.go:141] libmachine: (functional-299635) Calling .GetSSHKeyPath
I0827 22:21:59.192995   29172 main.go:141] libmachine: (functional-299635) Calling .GetSSHUsername
I0827 22:21:59.193141   29172 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/functional-299635/id_rsa Username:docker}
I0827 22:21:59.275045   29172 ssh_runner.go:195] Run: sudo crictl images --output json
I0827 22:21:59.311074   29172 main.go:141] libmachine: Making call to close driver server
I0827 22:21:59.311094   29172 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:59.311382   29172 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:59.311400   29172 main.go:141] libmachine: Making call to close connection to plugin binary
I0827 22:21:59.311409   29172 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:21:59.311412   29172 main.go:141] libmachine: Making call to close driver server
I0827 22:21:59.311439   29172 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:59.311657   29172 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:59.311676   29172 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299635 image ls --format json --alsologtostderr:
[{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a
261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"59ab775e57c507af2494c0b3e504e96d68b68897195e1a285e55c68c636203d5","repoDigests":["localhost/minikube-local-cache-test@sha256:b77279e83f92832e8e31a53e7c5f41240709ca017033bebbc716b541dba83075"],"repoTags":["localhost/minikube-local-cache-test:functional-299635"],"size":"3330"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/
kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab9
89956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641c
d8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55
d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-299635"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac28746
3b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299635 image ls --format json --alsologtostderr:
I0827 22:21:58.935771   29109 out.go:345] Setting OutFile to fd 1 ...
I0827 22:21:58.935867   29109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.935874   29109 out.go:358] Setting ErrFile to fd 2...
I0827 22:21:58.935879   29109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.936073   29109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
I0827 22:21:58.936651   29109 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.936753   29109 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.937106   29109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.937152   29109 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.954608   29109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35593
I0827 22:21:58.955121   29109 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.955661   29109 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.955686   29109 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.956021   29109 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.956225   29109 main.go:141] libmachine: (functional-299635) Calling .GetState
I0827 22:21:58.958352   29109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.958391   29109 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.972311   29109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40373
I0827 22:21:58.972773   29109 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.973268   29109 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.973304   29109 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.973637   29109 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.973864   29109 main.go:141] libmachine: (functional-299635) Calling .DriverName
I0827 22:21:58.974051   29109 ssh_runner.go:195] Run: systemctl --version
I0827 22:21:58.974079   29109 main.go:141] libmachine: (functional-299635) Calling .GetSSHHostname
I0827 22:21:58.977007   29109 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.977479   29109 main.go:141] libmachine: (functional-299635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:fe:c2", ip: ""} in network mk-functional-299635: {Iface:virbr1 ExpiryTime:2024-08-27 23:19:13 +0000 UTC Type:0 Mac:52:54:00:4e:fe:c2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-299635 Clientid:01:52:54:00:4e:fe:c2}
I0827 22:21:58.977517   29109 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.977638   29109 main.go:141] libmachine: (functional-299635) Calling .GetSSHPort
I0827 22:21:58.977813   29109 main.go:141] libmachine: (functional-299635) Calling .GetSSHKeyPath
I0827 22:21:58.977965   29109 main.go:141] libmachine: (functional-299635) Calling .GetSSHUsername
I0827 22:21:58.978106   29109 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/functional-299635/id_rsa Username:docker}
I0827 22:21:59.066470   29109 ssh_runner.go:195] Run: sudo crictl images --output json
I0827 22:21:59.103711   29109 main.go:141] libmachine: Making call to close driver server
I0827 22:21:59.103724   29109 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:59.103956   29109 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:59.103976   29109 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:21:59.103979   29109 main.go:141] libmachine: Making call to close connection to plugin binary
I0827 22:21:59.104021   29109 main.go:141] libmachine: Making call to close driver server
I0827 22:21:59.104032   29109 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:59.104238   29109 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:59.104252   29109 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
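
Note: the JSON format is the easiest of the image ls outputs to post-process; each entry carries id, repoDigests, repoTags and size, as the blob above shows. A sketch with jq (jq assumed installed):

out/minikube-linux-amd64 -p functional-299635 image ls --format json \
  | jq -r '.[] | "\(.repoTags | join(",")) \(.size)"'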

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299635 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-299635
size: "4943877"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 59ab775e57c507af2494c0b3e504e96d68b68897195e1a285e55c68c636203d5
repoDigests:
- localhost/minikube-local-cache-test@sha256:b77279e83f92832e8e31a53e7c5f41240709ca017033bebbc716b541dba83075
repoTags:
- localhost/minikube-local-cache-test:functional-299635
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299635 image ls --format yaml --alsologtostderr:
I0827 22:21:58.709920   29063 out.go:345] Setting OutFile to fd 1 ...
I0827 22:21:58.710168   29063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.710178   29063 out.go:358] Setting ErrFile to fd 2...
I0827 22:21:58.710182   29063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:58.710358   29063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
I0827 22:21:58.710879   29063 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.710968   29063 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:58.711482   29063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.711538   29063 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.726090   29063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
I0827 22:21:58.726568   29063 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.727078   29063 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.727126   29063 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.727495   29063 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.727720   29063 main.go:141] libmachine: (functional-299635) Calling .GetState
I0827 22:21:58.729885   29063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:58.729923   29063 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:58.745057   29063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
I0827 22:21:58.745425   29063 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:58.746048   29063 main.go:141] libmachine: Using API Version  1
I0827 22:21:58.746068   29063 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:58.746503   29063 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:58.746705   29063 main.go:141] libmachine: (functional-299635) Calling .DriverName
I0827 22:21:58.746917   29063 ssh_runner.go:195] Run: systemctl --version
I0827 22:21:58.746953   29063 main.go:141] libmachine: (functional-299635) Calling .GetSSHHostname
I0827 22:21:58.750147   29063 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.750653   29063 main.go:141] libmachine: (functional-299635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:fe:c2", ip: ""} in network mk-functional-299635: {Iface:virbr1 ExpiryTime:2024-08-27 23:19:13 +0000 UTC Type:0 Mac:52:54:00:4e:fe:c2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-299635 Clientid:01:52:54:00:4e:fe:c2}
I0827 22:21:58.750763   29063 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:58.751010   29063 main.go:141] libmachine: (functional-299635) Calling .GetSSHPort
I0827 22:21:58.751256   29063 main.go:141] libmachine: (functional-299635) Calling .GetSSHKeyPath
I0827 22:21:58.751410   29063 main.go:141] libmachine: (functional-299635) Calling .GetSSHUsername
I0827 22:21:58.751586   29063 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/functional-299635/id_rsa Username:docker}
I0827 22:21:58.842851   29063 ssh_runner.go:195] Run: sudo crictl images --output json
I0827 22:21:58.889118   29063 main.go:141] libmachine: Making call to close driver server
I0827 22:21:58.889132   29063 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:58.889472   29063 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:58.889492   29063 main.go:141] libmachine: Making call to close connection to plugin binary
I0827 22:21:58.889506   29063 main.go:141] libmachine: Making call to close driver server
I0827 22:21:58.889505   29063 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:21:58.889514   29063 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:21:58.889852   29063 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:21:58.889865   29063 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
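For reference, the stderr trace above shows how this subcommand is served: minikube opens an SSH session to the node and lists images through CRI's crictl. A minimal sketch of reproducing both views by hand, using only commands that appear in this run (profile name functional-299635 taken from the log):

  # High-level view, as exercised by the test
  out/minikube-linux-amd64 -p functional-299635 image ls --format yaml
  # Low-level view straight from CRI-O inside the node, matching the ssh_runner call in the trace
  out/minikube-linux-amd64 -p functional-299635 ssh -- sudo crictl images --output json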

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh pgrep buildkitd: exit status 1 (210.204949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image build -t localhost/my-image:functional-299635 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 image build -t localhost/my-image:functional-299635 testdata/build --alsologtostderr: (2.833566243s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299635 image build -t localhost/my-image:functional-299635 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b32f7fb87a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-299635
--> 043e0b4943c
Successfully tagged localhost/my-image:functional-299635
043e0b4943c1dad4dd71fef2af5882d0f13abce681a3d22527247a537f11836f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299635 image build -t localhost/my-image:functional-299635 testdata/build --alsologtostderr:
I0827 22:21:59.150994   29166 out.go:345] Setting OutFile to fd 1 ...
I0827 22:21:59.151269   29166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:59.151279   29166 out.go:358] Setting ErrFile to fd 2...
I0827 22:21:59.151284   29166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:21:59.151458   29166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
I0827 22:21:59.152016   29166 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:59.152532   29166 config.go:182] Loaded profile config "functional-299635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0827 22:21:59.152945   29166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:59.153010   29166 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:59.167779   29166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
I0827 22:21:59.168249   29166 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:59.168822   29166 main.go:141] libmachine: Using API Version  1
I0827 22:21:59.168846   29166 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:59.169249   29166 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:59.169482   29166 main.go:141] libmachine: (functional-299635) Calling .GetState
I0827 22:21:59.171353   29166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0827 22:21:59.171395   29166 main.go:141] libmachine: Launching plugin server for driver kvm2
I0827 22:21:59.185674   29166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
I0827 22:21:59.186005   29166 main.go:141] libmachine: () Calling .GetVersion
I0827 22:21:59.186437   29166 main.go:141] libmachine: Using API Version  1
I0827 22:21:59.186459   29166 main.go:141] libmachine: () Calling .SetConfigRaw
I0827 22:21:59.186742   29166 main.go:141] libmachine: () Calling .GetMachineName
I0827 22:21:59.186923   29166 main.go:141] libmachine: (functional-299635) Calling .DriverName
I0827 22:21:59.187111   29166 ssh_runner.go:195] Run: systemctl --version
I0827 22:21:59.187143   29166 main.go:141] libmachine: (functional-299635) Calling .GetSSHHostname
I0827 22:21:59.189955   29166 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:59.190413   29166 main.go:141] libmachine: (functional-299635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:fe:c2", ip: ""} in network mk-functional-299635: {Iface:virbr1 ExpiryTime:2024-08-27 23:19:13 +0000 UTC Type:0 Mac:52:54:00:4e:fe:c2 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-299635 Clientid:01:52:54:00:4e:fe:c2}
I0827 22:21:59.190438   29166 main.go:141] libmachine: (functional-299635) DBG | domain functional-299635 has defined IP address 192.168.39.110 and MAC address 52:54:00:4e:fe:c2 in network mk-functional-299635
I0827 22:21:59.190584   29166 main.go:141] libmachine: (functional-299635) Calling .GetSSHPort
I0827 22:21:59.190778   29166 main.go:141] libmachine: (functional-299635) Calling .GetSSHKeyPath
I0827 22:21:59.190996   29166 main.go:141] libmachine: (functional-299635) Calling .GetSSHUsername
I0827 22:21:59.191177   29166 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/functional-299635/id_rsa Username:docker}
I0827 22:21:59.274793   29166 build_images.go:161] Building image from path: /tmp/build.1896065271.tar
I0827 22:21:59.274864   29166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0827 22:21:59.286512   29166 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1896065271.tar
I0827 22:21:59.292681   29166 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1896065271.tar: stat -c "%s %y" /var/lib/minikube/build/build.1896065271.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1896065271.tar': No such file or directory
I0827 22:21:59.292708   29166 ssh_runner.go:362] scp /tmp/build.1896065271.tar --> /var/lib/minikube/build/build.1896065271.tar (3072 bytes)
I0827 22:21:59.335092   29166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1896065271
I0827 22:21:59.344878   29166 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1896065271 -xf /var/lib/minikube/build/build.1896065271.tar
I0827 22:21:59.356550   29166 crio.go:315] Building image: /var/lib/minikube/build/build.1896065271
I0827 22:21:59.356626   29166 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-299635 /var/lib/minikube/build/build.1896065271 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0827 22:22:01.913140   29166 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-299635 /var/lib/minikube/build/build.1896065271 --cgroup-manager=cgroupfs: (2.556481382s)
I0827 22:22:01.913221   29166 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1896065271
I0827 22:22:01.924658   29166 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1896065271.tar
I0827 22:22:01.934320   29166 build_images.go:217] Built localhost/my-image:functional-299635 from /tmp/build.1896065271.tar
I0827 22:22:01.934353   29166 build_images.go:133] succeeded building to: functional-299635
I0827 22:22:01.934359   29166 build_images.go:134] failed building to: 
I0827 22:22:01.934387   29166 main.go:141] libmachine: Making call to close driver server
I0827 22:22:01.934402   29166 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:22:01.934661   29166 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:22:01.934680   29166 main.go:141] libmachine: Making call to close connection to plugin binary
I0827 22:22:01.934690   29166 main.go:141] libmachine: Making call to close driver server
I0827 22:22:01.934699   29166 main.go:141] libmachine: (functional-299635) Calling .Close
I0827 22:22:01.934722   29166 main.go:141] libmachine: (functional-299635) DBG | Closing plugin on server side
I0827 22:22:01.934933   29166 main.go:141] libmachine: Successfully made call to close driver server
I0827 22:22:01.934949   29166 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)
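The STEP lines in the stdout above imply a three-line Dockerfile in testdata/build, and the trace shows the mechanics: the build context is tarred, copied to /var/lib/minikube/build on the node, and built there with podman. A hedged sketch of reproducing an equivalent build outside the test harness; the Dockerfile content and the /tmp/build-demo directory are reconstructions for illustration, not the actual testdata:

  # Recreate a build context equivalent to what the STEP output suggests
  mkdir -p /tmp/build-demo && cd /tmp/build-demo
  echo hello > content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  # Same entry point the test drives; minikube ships the context to the node and runs podman build there
  out/minikube-linux-amd64 -p functional-299635 image build -t localhost/my-image:functional-299635 .
  # Confirm the result is visible to the runtime
  out/minikube-linux-amd64 -p functional-299635 image ls | grep my-image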

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.689931301s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-299635
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image load --daemon kicbase/echo-server:functional-299635 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 image load --daemon kicbase/echo-server:functional-299635 --alsologtostderr: (1.865711963s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image load --daemon kicbase/echo-server:functional-299635 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdspecific-port2375253649/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.513583ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdspecific-port2375253649/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299635 ssh "sudo umount -f /mount-9p": exit status 1 (199.524654ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-299635 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdspecific-port2375253649/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)
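The sequence above can be replayed by hand; /tmp/mount-demo below is an arbitrary stand-in for the per-test temp directory, and port 46464 is the port the test pins:

  mkdir -p /tmp/mount-demo
  # Serve the host directory into the guest over 9p on a fixed port (keep it running in the background)
  out/minikube-linux-amd64 mount -p functional-299635 /tmp/mount-demo:/mount-9p --port 46464 &
  sleep 2  # give the mount a moment; the first findmnt in the log above also raced and was retried
  # Verify from inside the guest that the 9p mount is present and readable
  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-299635 ssh -- ls -la /mount-9p
  # Tear down by stopping the mount process; a later manual umount then reports "not mounted", as seen above
  kill %1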

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-299635
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image load --daemon kicbase/echo-server:functional-299635 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-299635 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299635 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3148410995/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image save kicbase/echo-server:functional-299635 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image rm kicbase/echo-server:functional-299635 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-299635 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.59885147s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-299635
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-299635 image save --daemon kicbase/echo-server:functional-299635 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-299635
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
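Taken together, the last four tests form a save/remove/load round trip for the kicbase/echo-server image. A condensed sketch with names taken from the commands above (the tar path is shortened to /tmp for the example; note that the test's final inspect targets localhost/kicbase/echo-server, the name under which the image lands after save --daemon):

  out/minikube-linux-amd64 -p functional-299635 image save kicbase/echo-server:functional-299635 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-299635 image rm kicbase/echo-server:functional-299635
  out/minikube-linux-amd64 -p functional-299635 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-299635 image save --daemon kicbase/echo-server:functional-299635
  docker image inspect localhost/kicbase/echo-server:functional-299635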

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-299635
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-299635
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-299635
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (195.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-158602 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-158602 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.22259334s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.88s)
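The invocation above is the whole setup for the multi-control-plane suite. A trimmed sketch of the same flow (the flags are a subset of the test's full command line; ha-158602 is the profile created above):

  # Create a cluster with multiple control-plane nodes
  out/minikube-linux-amd64 start -p ha-158602 --ha --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
  # Confirm all nodes report as expected
  out/minikube-linux-amd64 -p ha-158602 status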

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-158602 -- rollout status deployment/busybox: (3.974168849s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-crtgh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-gxvsc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-hmcwr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-crtgh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-gxvsc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-hmcwr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-crtgh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-gxvsc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-hmcwr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.01s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-crtgh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-crtgh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-gxvsc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-gxvsc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-hmcwr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-158602 -- exec busybox-7dff88458-hmcwr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-158602 -v=7 --alsologtostderr
E0827 22:26:21.248891   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.255971   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.267368   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.288682   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.330072   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.411550   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.573393   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:21.894679   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:22.536616   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:26:23.818250   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-158602 -v=7 --alsologtostderr: (55.582919058s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.38s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-158602 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status --output json -v=7 --alsologtostderr
E0827 22:26:26.379769   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt ha-158602-m02:/home/docker/cp-test_ha-158602_ha-158602-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test_ha-158602_ha-158602-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt ha-158602-m03:/home/docker/cp-test_ha-158602_ha-158602-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test_ha-158602_ha-158602-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt ha-158602-m04:/home/docker/cp-test_ha-158602_ha-158602-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test_ha-158602_ha-158602-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m02:/home/docker/cp-test.txt ha-158602:/home/docker/cp-test_ha-158602-m02_ha-158602.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test_ha-158602-m02_ha-158602.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m02:/home/docker/cp-test.txt ha-158602-m03:/home/docker/cp-test_ha-158602-m02_ha-158602-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test_ha-158602-m02_ha-158602-m03.txt"
E0827 22:26:31.502102   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m02:/home/docker/cp-test.txt ha-158602-m04:/home/docker/cp-test_ha-158602-m02_ha-158602-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test_ha-158602-m02_ha-158602-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt ha-158602:/home/docker/cp-test_ha-158602-m03_ha-158602.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test_ha-158602-m03_ha-158602.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt ha-158602-m02:/home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test_ha-158602-m03_ha-158602-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m03:/home/docker/cp-test.txt ha-158602-m04:/home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test_ha-158602-m03_ha-158602-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2080796798/001/cp-test_ha-158602-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt ha-158602:/home/docker/cp-test_ha-158602-m04_ha-158602.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602 "sudo cat /home/docker/cp-test_ha-158602-m04_ha-158602.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt ha-158602-m02:/home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test_ha-158602-m04_ha-158602-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 cp ha-158602-m04:/home/docker/cp-test.txt ha-158602-m03:/home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m03 "sudo cat /home/docker/cp-test_ha-158602-m04_ha-158602-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.33s)
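The copy matrix above exercises every node pairing; the essential forms reduce to three directions plus a verification step, sketched here with names taken from the commands above:

  # host -> node
  out/minikube-linux-amd64 -p ha-158602 cp testdata/cp-test.txt ha-158602:/home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt /tmp/cp-test_ha-158602.txt
  # node -> node
  out/minikube-linux-amd64 -p ha-158602 cp ha-158602:/home/docker/cp-test.txt ha-158602-m02:/home/docker/cp-test_ha-158602_ha-158602-m02.txt
  # verify on the target node over ssh
  out/minikube-linux-amd64 -p ha-158602 ssh -n ha-158602-m02 "sudo cat /home/docker/cp-test_ha-158602_ha-158602-m02.txt"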

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.465399349s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-158602 node delete m03 -v=7 --alsologtostderr: (15.974538339s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (340.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-158602 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0827 22:41:21.249139   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:42:44.312577   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-158602 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m40.170296813s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (340.91s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-158602 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-158602 --control-plane -v=7 --alsologtostderr: (1m17.217928448s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-158602 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.02s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestJSONOutput/start/Command (75.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-103624 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0827 22:46:21.248274   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-103624 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.044008349s)
--- PASS: TestJSONOutput/start/Command (75.04s)
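For reference, --output=json switches the start command above (and the pause/unpause commands later in this suite) to structured JSON progress events instead of the usual text output, which is what the DistinctCurrentSteps and IncreasingCurrentSteps checks inspect. The sketch below simply repeats the suite's own commands with the same profile name:

  out/minikube-linux-amd64 start -p json-output-103624 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 pause -p json-output-103624 --output=json --user=testUser
  out/minikube-linux-amd64 unpause -p json-output-103624 --output=json --user=testUser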

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-103624 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-103624 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.59s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-103624 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-103624 --output=json --user=testUser: (6.58739197s)
--- PASS: TestJSONOutput/stop/Command (6.59s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-230291 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-230291 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.954722ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0a30b9a3-7687-4e0e-96f8-76c31461015e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-230291] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f12e938e-f8bc-48dd-9886-03de9aded136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"8d47187b-5be5-42a5-ba56-d001c8ed3f7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"419ce6c7-54af-42c6-ac5b-08ca9f4f342e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig"}}
	{"specversion":"1.0","id":"4108afc0-6e45-4161-9be6-8d8daa0d8029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube"}}
	{"specversion":"1.0","id":"54cb2b1b-378c-47e1-b9c5-dcc20e234601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f60f725b-f8b3-4f85-af04-97e18e1bad06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f404c52-cf10-4e9e-804e-894c73ed2b08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-230291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-230291
--- PASS: TestErrorJSONOutput (0.19s)
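The captured stdout above shows the line-delimited, CloudEvents-style JSON that minikube emits with --output=json: each line carries specversion, id, source, type, datacontenttype, and a data object whose values are all strings. As a minimal sketch only (the cloudEvent struct below mirrors those visible keys and is illustrative, not minikube's own type), such a stream could be decoded like this:

// Sketch: decode CloudEvents-style JSON lines like the ones captured above.
// Assumes one event per line and string-valued "data" fields, as in that output.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Pipe minikube output in, e.g.: minikube start -p demo --output=json | ./decode
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // tolerate long event lines
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise such as klog lines
		}
		// Error events (io.k8s.sigs.minikube.error) also carry "exitcode" in data.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Fed the stdout shown above, this would print, for example, "io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64".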

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-157235 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-157235 --driver=kvm2  --container-runtime=crio: (44.824359564s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-160536 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-160536 --driver=kvm2  --container-runtime=crio: (39.957238816s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-157235
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-160536
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-160536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-160536
helpers_test.go:175: Cleaning up "first-157235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-157235
--- PASS: TestMinikubeProfile (87.42s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-328308 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-328308 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.494551s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.49s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-328308 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-328308 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.25s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.250253111s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.25s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-328308 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-340808
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-340808: (1.267541096s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-340808
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-340808: (22.84154092s)
--- PASS: TestMountStart/serial/RestartStopped (23.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-340808 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0827 22:51:21.248830   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.744425901s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-465478 -- rollout status deployment/busybox: (4.52311194s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-gcd59 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-j67n7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-gcd59 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-j67n7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-gcd59 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-j67n7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.92s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-gcd59 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-gcd59 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-j67n7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-465478 -- exec busybox-7dff88458-j67n7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (55.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-465478 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-465478 -v 3 --alsologtostderr: (54.707865257s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-465478 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp testdata/cp-test.txt multinode-465478:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478:/home/docker/cp-test.txt multinode-465478-m02:/home/docker/cp-test_multinode-465478_multinode-465478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test_multinode-465478_multinode-465478-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478:/home/docker/cp-test.txt multinode-465478-m03:/home/docker/cp-test_multinode-465478_multinode-465478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test_multinode-465478_multinode-465478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp testdata/cp-test.txt multinode-465478-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt multinode-465478:/home/docker/cp-test_multinode-465478-m02_multinode-465478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test_multinode-465478-m02_multinode-465478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m02:/home/docker/cp-test.txt multinode-465478-m03:/home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test_multinode-465478-m02_multinode-465478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp testdata/cp-test.txt multinode-465478-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1822655459/001/cp-test_multinode-465478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt multinode-465478:/home/docker/cp-test_multinode-465478-m03_multinode-465478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478 "sudo cat /home/docker/cp-test_multinode-465478-m03_multinode-465478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 cp multinode-465478-m03:/home/docker/cp-test.txt multinode-465478-m02:/home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 ssh -n multinode-465478-m02 "sudo cat /home/docker/cp-test_multinode-465478-m03_multinode-465478-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.90s)

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-465478 node stop m03: (1.380956699s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465478 status: exit status 7 (402.716934ms)

                                                
                                                
-- stdout --
	multinode-465478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-465478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-465478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr: exit status 7 (406.592105ms)

                                                
                                                
-- stdout --
	multinode-465478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-465478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-465478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 22:52:32.973207   46418 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:52:32.973458   46418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:52:32.973468   46418 out.go:358] Setting ErrFile to fd 2...
	I0827 22:52:32.973473   46418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:52:32.973636   46418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-7571/.minikube/bin
	I0827 22:52:32.973800   46418 out.go:352] Setting JSON to false
	I0827 22:52:32.973822   46418 mustload.go:65] Loading cluster: multinode-465478
	I0827 22:52:32.973928   46418 notify.go:220] Checking for updates...
	I0827 22:52:32.974167   46418 config.go:182] Loaded profile config "multinode-465478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0827 22:52:32.974179   46418 status.go:255] checking status of multinode-465478 ...
	I0827 22:52:32.974561   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:32.974622   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:32.993687   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0827 22:52:32.994079   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:32.994618   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:32.994633   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:32.995023   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:32.995312   46418 main.go:141] libmachine: (multinode-465478) Calling .GetState
	I0827 22:52:32.996827   46418 status.go:330] multinode-465478 host status = "Running" (err=<nil>)
	I0827 22:52:32.996842   46418 host.go:66] Checking if "multinode-465478" exists ...
	I0827 22:52:32.997125   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:32.997167   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.012530   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0827 22:52:33.012948   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.013396   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.013420   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.013721   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.013920   46418 main.go:141] libmachine: (multinode-465478) Calling .GetIP
	I0827 22:52:33.016613   46418 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:52:33.017034   46418 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:52:33.017064   46418 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:52:33.017219   46418 host.go:66] Checking if "multinode-465478" exists ...
	I0827 22:52:33.017520   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:33.017560   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.032626   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0827 22:52:33.033002   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.033624   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.033645   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.033956   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.034144   46418 main.go:141] libmachine: (multinode-465478) Calling .DriverName
	I0827 22:52:33.034321   46418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:52:33.034344   46418 main.go:141] libmachine: (multinode-465478) Calling .GetSSHHostname
	I0827 22:52:33.036973   46418 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:52:33.037384   46418 main.go:141] libmachine: (multinode-465478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:d2:2e", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:49:47 +0000 UTC Type:0 Mac:52:54:00:2b:d2:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:multinode-465478 Clientid:01:52:54:00:2b:d2:2e}
	I0827 22:52:33.037414   46418 main.go:141] libmachine: (multinode-465478) DBG | domain multinode-465478 has defined IP address 192.168.39.203 and MAC address 52:54:00:2b:d2:2e in network mk-multinode-465478
	I0827 22:52:33.037550   46418 main.go:141] libmachine: (multinode-465478) Calling .GetSSHPort
	I0827 22:52:33.037708   46418 main.go:141] libmachine: (multinode-465478) Calling .GetSSHKeyPath
	I0827 22:52:33.037854   46418 main.go:141] libmachine: (multinode-465478) Calling .GetSSHUsername
	I0827 22:52:33.037971   46418 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478/id_rsa Username:docker}
	I0827 22:52:33.121187   46418 ssh_runner.go:195] Run: systemctl --version
	I0827 22:52:33.127399   46418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:52:33.142156   46418 kubeconfig.go:125] found "multinode-465478" server: "https://192.168.39.203:8443"
	I0827 22:52:33.142188   46418 api_server.go:166] Checking apiserver status ...
	I0827 22:52:33.142232   46418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:52:33.155159   46418 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup
	W0827 22:52:33.165160   46418 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1060/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0827 22:52:33.165216   46418 ssh_runner.go:195] Run: ls
	I0827 22:52:33.169137   46418 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0827 22:52:33.173152   46418 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0827 22:52:33.173171   46418 status.go:422] multinode-465478 apiserver status = Running (err=<nil>)
	I0827 22:52:33.173181   46418 status.go:257] multinode-465478 status: &{Name:multinode-465478 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:52:33.173199   46418 status.go:255] checking status of multinode-465478-m02 ...
	I0827 22:52:33.173567   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:33.173617   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.188918   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0827 22:52:33.189363   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.189788   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.189808   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.190117   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.190279   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetState
	I0827 22:52:33.191752   46418 status.go:330] multinode-465478-m02 host status = "Running" (err=<nil>)
	I0827 22:52:33.191767   46418 host.go:66] Checking if "multinode-465478-m02" exists ...
	I0827 22:52:33.192062   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:33.192117   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.206514   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I0827 22:52:33.206932   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.207391   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.207411   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.207707   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.207862   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetIP
	I0827 22:52:33.210308   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | domain multinode-465478-m02 has defined MAC address 52:54:00:10:62:34 in network mk-multinode-465478
	I0827 22:52:33.210707   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:62:34", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:50:50 +0000 UTC Type:0 Mac:52:54:00:10:62:34 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-465478-m02 Clientid:01:52:54:00:10:62:34}
	I0827 22:52:33.210738   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | domain multinode-465478-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:10:62:34 in network mk-multinode-465478
	I0827 22:52:33.210817   46418 host.go:66] Checking if "multinode-465478-m02" exists ...
	I0827 22:52:33.211106   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:33.211137   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.225676   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42867
	I0827 22:52:33.226007   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.226425   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.226445   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.226723   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.226907   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .DriverName
	I0827 22:52:33.227120   46418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:52:33.227138   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetSSHHostname
	I0827 22:52:33.229868   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | domain multinode-465478-m02 has defined MAC address 52:54:00:10:62:34 in network mk-multinode-465478
	I0827 22:52:33.230268   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:62:34", ip: ""} in network mk-multinode-465478: {Iface:virbr1 ExpiryTime:2024-08-27 23:50:50 +0000 UTC Type:0 Mac:52:54:00:10:62:34 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-465478-m02 Clientid:01:52:54:00:10:62:34}
	I0827 22:52:33.230299   46418 main.go:141] libmachine: (multinode-465478-m02) DBG | domain multinode-465478-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:10:62:34 in network mk-multinode-465478
	I0827 22:52:33.230452   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetSSHPort
	I0827 22:52:33.230607   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetSSHKeyPath
	I0827 22:52:33.230742   46418 main.go:141] libmachine: (multinode-465478-m02) Calling .GetSSHUsername
	I0827 22:52:33.230884   46418 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19522-7571/.minikube/machines/multinode-465478-m02/id_rsa Username:docker}
	I0827 22:52:33.306981   46418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:52:33.320164   46418 status.go:257] multinode-465478-m02 status: &{Name:multinode-465478-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0827 22:52:33.320199   46418 status.go:255] checking status of multinode-465478-m03 ...
	I0827 22:52:33.320520   46418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0827 22:52:33.320556   46418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0827 22:52:33.335859   46418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0827 22:52:33.336272   46418 main.go:141] libmachine: () Calling .GetVersion
	I0827 22:52:33.336676   46418 main.go:141] libmachine: Using API Version  1
	I0827 22:52:33.336690   46418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0827 22:52:33.337097   46418 main.go:141] libmachine: () Calling .GetMachineName
	I0827 22:52:33.337265   46418 main.go:141] libmachine: (multinode-465478-m03) Calling .GetState
	I0827 22:52:33.338836   46418 status.go:330] multinode-465478-m03 host status = "Stopped" (err=<nil>)
	I0827 22:52:33.338851   46418 status.go:343] host is not running, skipping remaining checks
	I0827 22:52:33.338858   46418 status.go:257] multinode-465478-m03 status: &{Name:multinode-465478-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-465478 node start m03 -v=7 --alsologtostderr: (37.213716355s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.82s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-465478 node delete m03: (1.649573627s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (190.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465478 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0827 23:01:21.249018   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465478 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m9.834599421s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-465478 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (190.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.34s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-465478
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465478-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-465478-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (56.394934ms)

                                                
                                                
-- stdout --
	* [multinode-465478-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-465478-m02' is duplicated with machine name 'multinode-465478-m02' in profile 'multinode-465478'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-465478-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-465478-m03 --driver=kvm2  --container-runtime=crio: (42.044918754s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-465478
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-465478: exit status 80 (199.841967ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-465478 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-465478-m03 already exists in multinode-465478-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-465478-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.34s)

                                                
                                    
TestScheduledStopUnix (114.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-685306 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-685306 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.803629334s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-685306 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-685306 -n scheduled-stop-685306
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-685306 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-685306 --cancel-scheduled
E0827 23:11:21.248902   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-685306 -n scheduled-stop-685306
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-685306
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-685306 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-685306
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-685306: exit status 7 (64.640669ms)

                                                
                                                
-- stdout --
	scheduled-stop-685306
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-685306 -n scheduled-stop-685306
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-685306 -n scheduled-stop-685306: exit status 7 (62.971754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-685306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-685306
--- PASS: TestScheduledStopUnix (114.33s)

                                                
                                    
TestRunningBinaryUpgrade (241.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3839591252 start -p running-upgrade-906048 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3839591252 start -p running-upgrade-906048 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m17.248516459s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-906048 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-906048 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.934796846s)
helpers_test.go:175: Cleaning up "running-upgrade-906048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-906048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-906048: (1.183509348s)
--- PASS: TestRunningBinaryUpgrade (241.76s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (72.423553ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-887820] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-7571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-7571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-887820 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-887820 --driver=kvm2  --container-runtime=crio: (1m34.08323823s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-887820 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.32s)

TestNoKubernetes/serial/StartWithStopK8s (43.6s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.492170432s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-887820 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-887820 status -o json: exit status 2 (268.732554ms)
-- stdout --
	{"Name":"NoKubernetes-887820","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-887820
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.60s)
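Note: the `status -o json` output above is the state checked after a `--no-kubernetes` start: the host is running while kubelet and apiserver are stopped, which is also why the status command exits 2 here. A minimal sketch of decoding that JSON, assuming only the fields visible in this output (the real minikube status struct may carry more):

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors only the fields visible in the `status -o json` line above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-887820","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// Host running with kubelet/apiserver stopped is the state a
	// --no-kubernetes start is expected to leave behind.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}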
TestNoKubernetes/serial/Start (45.79s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-887820 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.789563826s)
--- PASS: TestNoKubernetes/serial/Start (45.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-887820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-887820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.486199ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
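Note: `systemctl is-active --quiet kubelet` exits 0 only when the unit is active and non-zero otherwise (status 3 in the SSH output above), and `minikube ssh` surfaces that as exit status 1, which is the result this check wants. A small local sketch of the same check, assuming a systemd host; illustrative, not the test's code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing; exit code 0 means
	// active, non-zero (3 in the run above) means the unit is not running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("kubelet not active (systemctl exit code %d)\n", exitErr.ExitCode())
	}
}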
TestNoKubernetes/serial/ProfileList (1.63s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.086190236s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.63s)

TestNoKubernetes/serial/Stop (1.28s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-887820
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-887820: (1.277379533s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (43.22s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-887820 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-887820 --driver=kvm2  --container-runtime=crio: (43.217089871s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-887820 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-887820 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.701664ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestStoppedBinaryUpgrade/Setup (2.33s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.33s)

TestStoppedBinaryUpgrade/Upgrade (126.24s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2857612826 start -p stopped-upgrade-300892 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0827 23:16:04.316417   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2857612826 start -p stopped-upgrade-300892 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m23.851279714s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2857612826 -p stopped-upgrade-300892 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2857612826 -p stopped-upgrade-300892 stop: (1.394384899s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-300892 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-300892 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.997465194s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.24s)

TestPause/serial/Start (87.02s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-677405 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0827 23:16:21.248538   14765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-7571/.minikube/profiles/functional-299635/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-677405 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m27.023125777s)
--- PASS: TestPause/serial/Start (87.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-300892
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

Test skip (32/207)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)