Test Report: KVM_Linux_crio 19411

4600b9572ee814234a805050dbc754aa211b5034:2024-08-12:35750

Test fail (10/221)

TestAddons/Setup (2400.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-800382 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-800382 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.955731219s)

-- stdout --
	* [addons-800382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-800382" primary control-plane node in "addons-800382" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800382 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-800382 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, cloud-spanner, metrics-server, helm-tiller, inspektor-gadget, ingress-dns, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0812 11:26:47.169958  471577 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:26:47.170221  471577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:26:47.170235  471577 out.go:304] Setting ErrFile to fd 2...
	I0812 11:26:47.170249  471577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:26:47.170440  471577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 11:26:47.171110  471577 out.go:298] Setting JSON to false
	I0812 11:26:47.172045  471577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11338,"bootTime":1723450669,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:26:47.172108  471577 start.go:139] virtualization: kvm guest
	I0812 11:26:47.174269  471577 out.go:177] * [addons-800382] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:26:47.175842  471577 notify.go:220] Checking for updates...
	I0812 11:26:47.175879  471577 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 11:26:47.177416  471577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:26:47.178953  471577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 11:26:47.180621  471577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:26:47.182297  471577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 11:26:47.183916  471577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 11:26:47.185532  471577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:26:47.218447  471577 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 11:26:47.219869  471577 start.go:297] selected driver: kvm2
	I0812 11:26:47.219890  471577 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:26:47.219902  471577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 11:26:47.220607  471577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:26:47.220707  471577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:26:47.236490  471577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:26:47.236576  471577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:26:47.236796  471577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 11:26:47.236825  471577 cni.go:84] Creating CNI manager for ""
	I0812 11:26:47.236832  471577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:26:47.236840  471577 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:26:47.236898  471577 start.go:340] cluster config:
	{Name:addons-800382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-800382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:26:47.236997  471577 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:26:47.238901  471577 out.go:177] * Starting "addons-800382" primary control-plane node in "addons-800382" cluster
	I0812 11:26:47.240218  471577 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:26:47.240252  471577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:26:47.240271  471577 cache.go:56] Caching tarball of preloaded images
	I0812 11:26:47.240345  471577 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 11:26:47.240355  471577 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 11:26:47.240662  471577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/config.json ...
	I0812 11:26:47.240685  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/config.json: {Name:mke92ac766063a9be3ee467c187e610960a75a7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:26:47.240819  471577 start.go:360] acquireMachinesLock for addons-800382: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 11:26:47.240870  471577 start.go:364] duration metric: took 37.614µs to acquireMachinesLock for "addons-800382"
	I0812 11:26:47.240886  471577 start.go:93] Provisioning new machine with config: &{Name:addons-800382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-800382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:26:47.240943  471577 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 11:26:47.242551  471577 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0812 11:26:47.242786  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:26:47.242849  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:26:47.257874  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0812 11:26:47.258400  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:26:47.259020  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:26:47.259044  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:26:47.259463  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:26:47.259695  471577 main.go:141] libmachine: (addons-800382) Calling .GetMachineName
	I0812 11:26:47.259874  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:26:47.260014  471577 start.go:159] libmachine.API.Create for "addons-800382" (driver="kvm2")
	I0812 11:26:47.260069  471577 client.go:168] LocalClient.Create starting
	I0812 11:26:47.260133  471577 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 11:26:47.456028  471577 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 11:26:47.611569  471577 main.go:141] libmachine: Running pre-create checks...
	I0812 11:26:47.611597  471577 main.go:141] libmachine: (addons-800382) Calling .PreCreateCheck
	I0812 11:26:47.613723  471577 main.go:141] libmachine: (addons-800382) Calling .GetConfigRaw
	I0812 11:26:47.614279  471577 main.go:141] libmachine: Creating machine...
	I0812 11:26:47.614298  471577 main.go:141] libmachine: (addons-800382) Calling .Create
	I0812 11:26:47.614535  471577 main.go:141] libmachine: (addons-800382) Creating KVM machine...
	I0812 11:26:47.615891  471577 main.go:141] libmachine: (addons-800382) DBG | found existing default KVM network
	I0812 11:26:47.616661  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:47.616487  471600 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0812 11:26:47.616716  471577 main.go:141] libmachine: (addons-800382) DBG | created network xml: 
	I0812 11:26:47.616734  471577 main.go:141] libmachine: (addons-800382) DBG | <network>
	I0812 11:26:47.616756  471577 main.go:141] libmachine: (addons-800382) DBG |   <name>mk-addons-800382</name>
	I0812 11:26:47.616767  471577 main.go:141] libmachine: (addons-800382) DBG |   <dns enable='no'/>
	I0812 11:26:47.616773  471577 main.go:141] libmachine: (addons-800382) DBG |   
	I0812 11:26:47.616779  471577 main.go:141] libmachine: (addons-800382) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 11:26:47.616789  471577 main.go:141] libmachine: (addons-800382) DBG |     <dhcp>
	I0812 11:26:47.616798  471577 main.go:141] libmachine: (addons-800382) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 11:26:47.616856  471577 main.go:141] libmachine: (addons-800382) DBG |     </dhcp>
	I0812 11:26:47.616885  471577 main.go:141] libmachine: (addons-800382) DBG |   </ip>
	I0812 11:26:47.616900  471577 main.go:141] libmachine: (addons-800382) DBG |   
	I0812 11:26:47.616912  471577 main.go:141] libmachine: (addons-800382) DBG | </network>
	I0812 11:26:47.616927  471577 main.go:141] libmachine: (addons-800382) DBG | 
	I0812 11:26:47.622629  471577 main.go:141] libmachine: (addons-800382) DBG | trying to create private KVM network mk-addons-800382 192.168.39.0/24...
	I0812 11:26:47.695776  471577 main.go:141] libmachine: (addons-800382) DBG | private KVM network mk-addons-800382 192.168.39.0/24 created
	I0812 11:26:47.695817  471577 main.go:141] libmachine: (addons-800382) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382 ...
	I0812 11:26:47.695834  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:47.695741  471600 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:26:47.695892  471577 main.go:141] libmachine: (addons-800382) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 11:26:47.695946  471577 main.go:141] libmachine: (addons-800382) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 11:26:48.008956  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:48.008790  471600 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa...
	I0812 11:26:48.106898  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:48.106711  471600 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/addons-800382.rawdisk...
	I0812 11:26:48.106932  471577 main.go:141] libmachine: (addons-800382) DBG | Writing magic tar header
	I0812 11:26:48.106948  471577 main.go:141] libmachine: (addons-800382) DBG | Writing SSH key tar header
	I0812 11:26:48.106961  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:48.106845  471600 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382 ...
	I0812 11:26:48.106979  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382
	I0812 11:26:48.106990  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382 (perms=drwx------)
	I0812 11:26:48.106997  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 11:26:48.107006  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:26:48.107013  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 11:26:48.107019  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 11:26:48.107030  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 11:26:48.107038  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home/jenkins
	I0812 11:26:48.107055  471577 main.go:141] libmachine: (addons-800382) DBG | Checking permissions on dir: /home
	I0812 11:26:48.107063  471577 main.go:141] libmachine: (addons-800382) DBG | Skipping /home - not owner
	I0812 11:26:48.107074  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 11:26:48.107083  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 11:26:48.107092  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 11:26:48.107099  471577 main.go:141] libmachine: (addons-800382) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 11:26:48.107104  471577 main.go:141] libmachine: (addons-800382) Creating domain...
	I0812 11:26:48.108382  471577 main.go:141] libmachine: (addons-800382) define libvirt domain using xml: 
	I0812 11:26:48.108407  471577 main.go:141] libmachine: (addons-800382) <domain type='kvm'>
	I0812 11:26:48.108417  471577 main.go:141] libmachine: (addons-800382)   <name>addons-800382</name>
	I0812 11:26:48.108425  471577 main.go:141] libmachine: (addons-800382)   <memory unit='MiB'>4000</memory>
	I0812 11:26:48.108433  471577 main.go:141] libmachine: (addons-800382)   <vcpu>2</vcpu>
	I0812 11:26:48.108444  471577 main.go:141] libmachine: (addons-800382)   <features>
	I0812 11:26:48.108453  471577 main.go:141] libmachine: (addons-800382)     <acpi/>
	I0812 11:26:48.108459  471577 main.go:141] libmachine: (addons-800382)     <apic/>
	I0812 11:26:48.108467  471577 main.go:141] libmachine: (addons-800382)     <pae/>
	I0812 11:26:48.108475  471577 main.go:141] libmachine: (addons-800382)     
	I0812 11:26:48.108482  471577 main.go:141] libmachine: (addons-800382)   </features>
	I0812 11:26:48.108491  471577 main.go:141] libmachine: (addons-800382)   <cpu mode='host-passthrough'>
	I0812 11:26:48.108500  471577 main.go:141] libmachine: (addons-800382)   
	I0812 11:26:48.108513  471577 main.go:141] libmachine: (addons-800382)   </cpu>
	I0812 11:26:48.108544  471577 main.go:141] libmachine: (addons-800382)   <os>
	I0812 11:26:48.108574  471577 main.go:141] libmachine: (addons-800382)     <type>hvm</type>
	I0812 11:26:48.108584  471577 main.go:141] libmachine: (addons-800382)     <boot dev='cdrom'/>
	I0812 11:26:48.108596  471577 main.go:141] libmachine: (addons-800382)     <boot dev='hd'/>
	I0812 11:26:48.108603  471577 main.go:141] libmachine: (addons-800382)     <bootmenu enable='no'/>
	I0812 11:26:48.108607  471577 main.go:141] libmachine: (addons-800382)   </os>
	I0812 11:26:48.108613  471577 main.go:141] libmachine: (addons-800382)   <devices>
	I0812 11:26:48.108620  471577 main.go:141] libmachine: (addons-800382)     <disk type='file' device='cdrom'>
	I0812 11:26:48.108648  471577 main.go:141] libmachine: (addons-800382)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/boot2docker.iso'/>
	I0812 11:26:48.108665  471577 main.go:141] libmachine: (addons-800382)       <target dev='hdc' bus='scsi'/>
	I0812 11:26:48.108675  471577 main.go:141] libmachine: (addons-800382)       <readonly/>
	I0812 11:26:48.108682  471577 main.go:141] libmachine: (addons-800382)     </disk>
	I0812 11:26:48.108692  471577 main.go:141] libmachine: (addons-800382)     <disk type='file' device='disk'>
	I0812 11:26:48.108702  471577 main.go:141] libmachine: (addons-800382)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 11:26:48.108711  471577 main.go:141] libmachine: (addons-800382)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/addons-800382.rawdisk'/>
	I0812 11:26:48.108717  471577 main.go:141] libmachine: (addons-800382)       <target dev='hda' bus='virtio'/>
	I0812 11:26:48.108744  471577 main.go:141] libmachine: (addons-800382)     </disk>
	I0812 11:26:48.108763  471577 main.go:141] libmachine: (addons-800382)     <interface type='network'>
	I0812 11:26:48.108776  471577 main.go:141] libmachine: (addons-800382)       <source network='mk-addons-800382'/>
	I0812 11:26:48.108787  471577 main.go:141] libmachine: (addons-800382)       <model type='virtio'/>
	I0812 11:26:48.108796  471577 main.go:141] libmachine: (addons-800382)     </interface>
	I0812 11:26:48.108804  471577 main.go:141] libmachine: (addons-800382)     <interface type='network'>
	I0812 11:26:48.108811  471577 main.go:141] libmachine: (addons-800382)       <source network='default'/>
	I0812 11:26:48.108821  471577 main.go:141] libmachine: (addons-800382)       <model type='virtio'/>
	I0812 11:26:48.108850  471577 main.go:141] libmachine: (addons-800382)     </interface>
	I0812 11:26:48.108872  471577 main.go:141] libmachine: (addons-800382)     <serial type='pty'>
	I0812 11:26:48.108880  471577 main.go:141] libmachine: (addons-800382)       <target port='0'/>
	I0812 11:26:48.108889  471577 main.go:141] libmachine: (addons-800382)     </serial>
	I0812 11:26:48.108917  471577 main.go:141] libmachine: (addons-800382)     <console type='pty'>
	I0812 11:26:48.108940  471577 main.go:141] libmachine: (addons-800382)       <target type='serial' port='0'/>
	I0812 11:26:48.108954  471577 main.go:141] libmachine: (addons-800382)     </console>
	I0812 11:26:48.108965  471577 main.go:141] libmachine: (addons-800382)     <rng model='virtio'>
	I0812 11:26:48.108977  471577 main.go:141] libmachine: (addons-800382)       <backend model='random'>/dev/random</backend>
	I0812 11:26:48.109003  471577 main.go:141] libmachine: (addons-800382)     </rng>
	I0812 11:26:48.109013  471577 main.go:141] libmachine: (addons-800382)     
	I0812 11:26:48.109027  471577 main.go:141] libmachine: (addons-800382)     
	I0812 11:26:48.109045  471577 main.go:141] libmachine: (addons-800382)   </devices>
	I0812 11:26:48.109058  471577 main.go:141] libmachine: (addons-800382) </domain>
	I0812 11:26:48.109070  471577 main.go:141] libmachine: (addons-800382) 
	I0812 11:26:48.113620  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:e9:cf:ae in network default
	I0812 11:26:48.114196  471577 main.go:141] libmachine: (addons-800382) Ensuring networks are active...
	I0812 11:26:48.114212  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:48.114967  471577 main.go:141] libmachine: (addons-800382) Ensuring network default is active
	I0812 11:26:48.115352  471577 main.go:141] libmachine: (addons-800382) Ensuring network mk-addons-800382 is active
	I0812 11:26:48.115949  471577 main.go:141] libmachine: (addons-800382) Getting domain xml...
	I0812 11:26:48.116800  471577 main.go:141] libmachine: (addons-800382) Creating domain...
	I0812 11:26:49.323218  471577 main.go:141] libmachine: (addons-800382) Waiting to get IP...
	I0812 11:26:49.324192  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:49.324723  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:49.324749  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:49.324638  471600 retry.go:31] will retry after 218.645464ms: waiting for machine to come up
	I0812 11:26:49.545664  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:49.546181  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:49.546203  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:49.546140  471600 retry.go:31] will retry after 266.428218ms: waiting for machine to come up
	I0812 11:26:49.814764  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:49.815154  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:49.815184  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:49.815107  471600 retry.go:31] will retry after 348.036472ms: waiting for machine to come up
	I0812 11:26:50.164339  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:50.164826  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:50.164853  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:50.164754  471600 retry.go:31] will retry after 575.539449ms: waiting for machine to come up
	I0812 11:26:50.741534  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:50.741965  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:50.742000  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:50.741899  471600 retry.go:31] will retry after 579.989755ms: waiting for machine to come up
	I0812 11:26:51.323762  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:51.324301  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:51.324327  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:51.324253  471600 retry.go:31] will retry after 952.570549ms: waiting for machine to come up
	I0812 11:26:52.278167  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:52.278573  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:52.278597  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:52.278544  471600 retry.go:31] will retry after 1.130615211s: waiting for machine to come up
	I0812 11:26:53.410925  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:53.411381  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:53.411412  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:53.411336  471600 retry.go:31] will retry after 1.090217623s: waiting for machine to come up
	I0812 11:26:54.503981  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:54.504601  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:54.504664  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:54.504562  471600 retry.go:31] will retry after 1.474747334s: waiting for machine to come up
	I0812 11:26:55.980985  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:55.981557  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:55.981588  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:55.981478  471600 retry.go:31] will retry after 1.460708111s: waiting for machine to come up
	I0812 11:26:57.444696  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:26:57.445321  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:26:57.445357  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:26:57.445271  471600 retry.go:31] will retry after 2.684746861s: waiting for machine to come up
	I0812 11:27:00.133272  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:00.133778  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:27:00.133805  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:27:00.133748  471600 retry.go:31] will retry after 3.139038013s: waiting for machine to come up
	I0812 11:27:03.274534  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:03.274939  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find current IP address of domain addons-800382 in network mk-addons-800382
	I0812 11:27:03.275010  471577 main.go:141] libmachine: (addons-800382) DBG | I0812 11:27:03.274923  471600 retry.go:31] will retry after 4.384739092s: waiting for machine to come up
	I0812 11:27:07.661447  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.661957  471577 main.go:141] libmachine: (addons-800382) Found IP for machine: 192.168.39.168
	I0812 11:27:07.662009  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has current primary IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.662020  471577 main.go:141] libmachine: (addons-800382) Reserving static IP address...
	I0812 11:27:07.662413  471577 main.go:141] libmachine: (addons-800382) DBG | unable to find host DHCP lease matching {name: "addons-800382", mac: "52:54:00:6a:1b:29", ip: "192.168.39.168"} in network mk-addons-800382
	I0812 11:27:07.740636  471577 main.go:141] libmachine: (addons-800382) DBG | Getting to WaitForSSH function...
	I0812 11:27:07.740668  471577 main.go:141] libmachine: (addons-800382) Reserved static IP address: 192.168.39.168
	I0812 11:27:07.740727  471577 main.go:141] libmachine: (addons-800382) Waiting for SSH to be available...
	I0812 11:27:07.743791  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.744269  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:07.744301  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.744440  471577 main.go:141] libmachine: (addons-800382) DBG | Using SSH client type: external
	I0812 11:27:07.744466  471577 main.go:141] libmachine: (addons-800382) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa (-rw-------)
	I0812 11:27:07.744614  471577 main.go:141] libmachine: (addons-800382) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 11:27:07.744644  471577 main.go:141] libmachine: (addons-800382) DBG | About to run SSH command:
	I0812 11:27:07.744662  471577 main.go:141] libmachine: (addons-800382) DBG | exit 0
	I0812 11:27:07.873446  471577 main.go:141] libmachine: (addons-800382) DBG | SSH cmd err, output: <nil>: 
	I0812 11:27:07.873731  471577 main.go:141] libmachine: (addons-800382) KVM machine creation complete!
	I0812 11:27:07.874077  471577 main.go:141] libmachine: (addons-800382) Calling .GetConfigRaw
	I0812 11:27:07.874655  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:07.874879  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:07.875052  471577 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 11:27:07.875069  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:07.876362  471577 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 11:27:07.876377  471577 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 11:27:07.876383  471577 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 11:27:07.876389  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:07.878898  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.879343  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:07.879371  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.879564  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:07.879741  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:07.879861  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:07.880012  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:07.880294  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:07.880562  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:07.880580  471577 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 11:27:07.992927  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:27:07.992953  471577 main.go:141] libmachine: Detecting the provisioner...
	I0812 11:27:07.992961  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:07.996144  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.996658  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:07.996682  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:07.996856  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:07.997125  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:07.997275  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:07.997419  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:07.997567  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:07.997773  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:07.997787  471577 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 11:27:08.110035  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 11:27:08.110134  471577 main.go:141] libmachine: found compatible host: buildroot
	I0812 11:27:08.110147  471577 main.go:141] libmachine: Provisioning with buildroot...
	I0812 11:27:08.110156  471577 main.go:141] libmachine: (addons-800382) Calling .GetMachineName
	I0812 11:27:08.110636  471577 buildroot.go:166] provisioning hostname "addons-800382"
	I0812 11:27:08.110674  471577 main.go:141] libmachine: (addons-800382) Calling .GetMachineName
	I0812 11:27:08.110930  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.113673  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.114089  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.114117  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.114251  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:08.114494  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.114680  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.114841  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:08.115035  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:08.115207  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:08.115219  471577 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-800382 && echo "addons-800382" | sudo tee /etc/hostname
	I0812 11:27:08.240949  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-800382
	
	I0812 11:27:08.240981  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.244562  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.244936  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.244997  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.245182  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:08.245471  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.245640  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.245788  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:08.245945  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:08.246128  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:08.246145  471577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-800382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-800382/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-800382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 11:27:08.366558  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 11:27:08.366595  471577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 11:27:08.366644  471577 buildroot.go:174] setting up certificates
	I0812 11:27:08.366658  471577 provision.go:84] configureAuth start
	I0812 11:27:08.366672  471577 main.go:141] libmachine: (addons-800382) Calling .GetMachineName
	I0812 11:27:08.367039  471577 main.go:141] libmachine: (addons-800382) Calling .GetIP
	I0812 11:27:08.369714  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.370042  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.370074  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.370196  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.372568  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.372895  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.372922  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.373119  471577 provision.go:143] copyHostCerts
	I0812 11:27:08.373210  471577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 11:27:08.373346  471577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 11:27:08.373421  471577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 11:27:08.373482  471577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.addons-800382 san=[127.0.0.1 192.168.39.168 addons-800382 localhost minikube]
	I0812 11:27:08.475391  471577 provision.go:177] copyRemoteCerts
	I0812 11:27:08.475463  471577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 11:27:08.475491  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.478295  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.478591  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.478612  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.478813  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:08.479035  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.479195  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:08.479363  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:08.564163  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 11:27:08.588528  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 11:27:08.611603  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 11:27:08.635742  471577 provision.go:87] duration metric: took 269.062707ms to configureAuth
	I0812 11:27:08.635774  471577 buildroot.go:189] setting minikube options for container-runtime
	I0812 11:27:08.635955  471577 config.go:182] Loaded profile config "addons-800382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:27:08.636037  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.638682  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.638999  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.639040  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.639191  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:08.639395  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.639591  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.639725  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:08.639899  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:08.640135  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:08.640157  471577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 11:27:08.923373  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 11:27:08.923412  471577 main.go:141] libmachine: Checking connection to Docker...
	I0812 11:27:08.923454  471577 main.go:141] libmachine: (addons-800382) Calling .GetURL
	I0812 11:27:08.924977  471577 main.go:141] libmachine: (addons-800382) DBG | Using libvirt version 6000000
	I0812 11:27:08.927456  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.927765  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.927790  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.928000  471577 main.go:141] libmachine: Docker is up and running!
	I0812 11:27:08.928018  471577 main.go:141] libmachine: Reticulating splines...
	I0812 11:27:08.928030  471577 client.go:171] duration metric: took 21.667945491s to LocalClient.Create
	I0812 11:27:08.928058  471577 start.go:167] duration metric: took 21.668046745s to libmachine.API.Create "addons-800382"
	I0812 11:27:08.928080  471577 start.go:293] postStartSetup for "addons-800382" (driver="kvm2")
	I0812 11:27:08.928095  471577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 11:27:08.928113  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:08.928379  471577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 11:27:08.928411  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:08.930730  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.931084  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:08.931111  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:08.931262  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:08.931460  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:08.931611  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:08.931803  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:09.021753  471577 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 11:27:09.026367  471577 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 11:27:09.026395  471577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 11:27:09.026479  471577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 11:27:09.026508  471577 start.go:296] duration metric: took 98.41797ms for postStartSetup
	I0812 11:27:09.026546  471577 main.go:141] libmachine: (addons-800382) Calling .GetConfigRaw
	I0812 11:27:09.027189  471577 main.go:141] libmachine: (addons-800382) Calling .GetIP
	I0812 11:27:09.030023  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.030389  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:09.030421  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.030680  471577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/config.json ...
	I0812 11:27:09.030863  471577 start.go:128] duration metric: took 21.78990926s to createHost
	I0812 11:27:09.030887  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:09.033015  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.033389  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:09.033412  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.033552  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:09.033787  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:09.033988  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:09.034149  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:09.034315  471577 main.go:141] libmachine: Using SSH client type: native
	I0812 11:27:09.034489  471577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0812 11:27:09.034500  471577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 11:27:09.145981  471577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723462029.115312124
	
	I0812 11:27:09.146008  471577 fix.go:216] guest clock: 1723462029.115312124
	I0812 11:27:09.146015  471577 fix.go:229] Guest: 2024-08-12 11:27:09.115312124 +0000 UTC Remote: 2024-08-12 11:27:09.030874799 +0000 UTC m=+21.896254704 (delta=84.437325ms)
	I0812 11:27:09.146057  471577 fix.go:200] guest clock delta is within tolerance: 84.437325ms
	I0812 11:27:09.146065  471577 start.go:83] releasing machines lock for "addons-800382", held for 21.905188257s
	I0812 11:27:09.146090  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:09.146400  471577 main.go:141] libmachine: (addons-800382) Calling .GetIP
	I0812 11:27:09.149141  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.149476  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:09.149519  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.149695  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:09.150224  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:09.150385  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:09.150495  471577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 11:27:09.150543  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:09.150557  471577 ssh_runner.go:195] Run: cat /version.json
	I0812 11:27:09.150581  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:09.153543  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.153570  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.153952  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:09.154036  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.154108  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:09.154125  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:09.154141  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:09.154290  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:09.154324  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:09.154567  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:09.154609  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:09.154737  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:09.154796  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:09.154895  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:09.234347  471577 ssh_runner.go:195] Run: systemctl --version
	I0812 11:27:09.259208  471577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 11:27:09.423456  471577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 11:27:09.430384  471577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 11:27:09.430500  471577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 11:27:09.446126  471577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 11:27:09.446155  471577 start.go:495] detecting cgroup driver to use...
	I0812 11:27:09.446256  471577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 11:27:09.463420  471577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 11:27:09.477204  471577 docker.go:217] disabling cri-docker service (if available) ...
	I0812 11:27:09.477300  471577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 11:27:09.490544  471577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 11:27:09.504339  471577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 11:27:09.627931  471577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 11:27:09.767609  471577 docker.go:233] disabling docker service ...
	I0812 11:27:09.767711  471577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 11:27:09.782271  471577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 11:27:09.794666  471577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 11:27:09.929094  471577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 11:27:10.060842  471577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 11:27:10.074398  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 11:27:10.092438  471577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 11:27:10.092533  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.103140  471577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 11:27:10.103225  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.114253  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.127032  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.139553  471577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 11:27:10.152441  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.164920  471577 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 11:27:10.184967  471577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
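The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A small Go sketch of the first two substitutions applied to an in-memory config; the sample input is illustrative, not the real 02-crio.conf shipped in the ISO.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
        // Same intent as the `sudo sed -i ...` runs in the log: replace whole
        // lines so the values hold regardless of what was there before.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }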
	I0812 11:27:10.195876  471577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 11:27:10.205914  471577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 11:27:10.205977  471577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 11:27:10.219408  471577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 11:27:10.229354  471577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:27:10.359497  471577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 11:27:10.499054  471577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 11:27:10.499160  471577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 11:27:10.504185  471577 start.go:563] Will wait 60s for crictl version
	I0812 11:27:10.504266  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:27:10.508403  471577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 11:27:10.550125  471577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 11:27:10.550267  471577 ssh_runner.go:195] Run: crio --version
	I0812 11:27:10.580916  471577 ssh_runner.go:195] Run: crio --version
	I0812 11:27:10.610177  471577 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 11:27:10.611616  471577 main.go:141] libmachine: (addons-800382) Calling .GetIP
	I0812 11:27:10.614269  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:10.614671  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:10.614691  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:10.614910  471577 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 11:27:10.619473  471577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:27:10.632561  471577 kubeadm.go:883] updating cluster {Name:addons-800382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-800382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 11:27:10.632699  471577 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:27:10.632765  471577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:27:10.675240  471577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 11:27:10.675319  471577 ssh_runner.go:195] Run: which lz4
	I0812 11:27:10.679517  471577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 11:27:10.683990  471577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 11:27:10.684018  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 11:27:12.039356  471577 crio.go:462] duration metric: took 1.359869268s to copy over tarball
	I0812 11:27:12.039430  471577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 11:27:14.328479  471577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.289004004s)
	I0812 11:27:14.328517  471577 crio.go:469] duration metric: took 2.289132201s to extract the tarball
	I0812 11:27:14.328528  471577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 11:27:14.367290  471577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 11:27:14.409669  471577 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 11:27:14.409700  471577 cache_images.go:84] Images are preloaded, skipping loading
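When no preloaded images are present, minikube copies the ~400 MB preload tarball to /preloaded.tar.lz4 on the guest and unpacks it into /var, as the two ssh_runner steps above show; a second `crictl images` run then confirms everything is cached. A local Go sketch of the extraction command, using the same flags as the log but run with os/exec instead of over SSH (it needs root, lz4, and the tarball to actually succeed):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }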
	I0812 11:27:14.409712  471577 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.30.3 crio true true} ...
	I0812 11:27:14.409925  471577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-800382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-800382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
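The kubelet drop-in shown above is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes, copied a few lines further down). A hypothetical Go sketch that renders the same unit text from a template; it only covers the fields visible in the log, not whatever else minikube's real generator handles.

    package main

    import (
        "os"
        "text/template"
    )

    // Template for the kubelet systemd drop-in, assembled from the values in the log.
    const unit = "[Unit]\n" +
        "Wants=crio.service\n" +
        "\n" +
        "[Service]\n" +
        "ExecStart=\n" +
        "ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}\n" +
        "\n" +
        "[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Kubelet": "/var/lib/minikube/binaries/v1.30.3/kubelet",
            "Node":    "addons-800382",
            "IP":      "192.168.39.168",
        })
    }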
	I0812 11:27:14.410059  471577 ssh_runner.go:195] Run: crio config
	I0812 11:27:14.461059  471577 cni.go:84] Creating CNI manager for ""
	I0812 11:27:14.461092  471577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:27:14.461106  471577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 11:27:14.461139  471577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-800382 NodeName:addons-800382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 11:27:14.461277  471577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-800382"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
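The generated kubeadm.yaml above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. As a minimal check that the kubelet fragment says what we expect, here is a Go sketch that unmarshals a few of its fields with gopkg.in/yaml.v3; the struct models only those fields and is not kubelet's real config type, and `go get gopkg.in/yaml.v3` is assumed.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // kubeletConfig is an illustrative subset of the KubeletConfiguration fields above.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
        doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\n" +
            "kind: KubeletConfiguration\n" +
            "cgroupDriver: cgroupfs\n" +
            "containerRuntimeEndpoint: unix:///var/run/crio/crio.sock\n" +
            "failSwapOn: false\n")
        var kc kubeletConfig
        if err := yaml.Unmarshal(doc, &kc); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", kc) // e.g. {Kind:KubeletConfiguration CgroupDriver:cgroupfs ...}
    }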
	
	I0812 11:27:14.461344  471577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 11:27:14.471566  471577 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 11:27:14.471656  471577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 11:27:14.481047  471577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 11:27:14.497813  471577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 11:27:14.514163  471577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0812 11:27:14.530212  471577 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0812 11:27:14.534083  471577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 11:27:14.546365  471577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:27:14.679976  471577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:27:14.698142  471577 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382 for IP: 192.168.39.168
	I0812 11:27:14.698171  471577 certs.go:194] generating shared ca certs ...
	I0812 11:27:14.698190  471577 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:14.698355  471577 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 11:27:14.828314  471577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt ...
	I0812 11:27:14.828350  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt: {Name:mkcdd29bc7792ddde277b1ba7985a3bdb3fb94d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:14.828554  471577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key ...
	I0812 11:27:14.828570  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key: {Name:mke971cf9eb60b2a81601a2feab3aebfdb562c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:14.828670  471577 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 11:27:14.944619  471577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt ...
	I0812 11:27:14.944654  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt: {Name:mk49790161b94378683ac372a3f780608b7e9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:14.944860  471577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key ...
	I0812 11:27:14.944876  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key: {Name:mkcbab2b1b39171bb806e5dba78cedcb913b509f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:14.944972  471577 certs.go:256] generating profile certs ...
	I0812 11:27:14.945038  471577 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.key
	I0812 11:27:14.945065  471577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.crt with IP's: []
	I0812 11:27:15.056196  471577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.crt ...
	I0812 11:27:15.056231  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.crt: {Name:mk8fddecff921df813c864e14837368ffd293070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.056432  471577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.key ...
	I0812 11:27:15.056451  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/client.key: {Name:mk504cbbaa508ca2abca4048de9b8f438b7ec376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.056562  471577 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key.81feeb41
	I0812 11:27:15.056589  471577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt.81feeb41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168]
	I0812 11:27:15.156982  471577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt.81feeb41 ...
	I0812 11:27:15.157018  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt.81feeb41: {Name:mkc988420ecbcfc7b455b918391f3e6ae18b2023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.157222  471577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key.81feeb41 ...
	I0812 11:27:15.157244  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key.81feeb41: {Name:mkc9860c13767722114d21f18709ba6fc2ffdece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.157345  471577 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt.81feeb41 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt
	I0812 11:27:15.157451  471577 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key.81feeb41 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key
	I0812 11:27:15.157503  471577 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.key
	I0812 11:27:15.157524  471577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.crt with IP's: []
	I0812 11:27:15.337341  471577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.crt ...
	I0812 11:27:15.337378  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.crt: {Name:mk5084556a6f1a100a68c183682a03a0869f821f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.337558  471577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.key ...
	I0812 11:27:15.337573  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.key: {Name:mk87c53bdd904c3b7cb1c1916dc6b68ebafc9829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:15.337754  471577 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 11:27:15.337793  471577 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 11:27:15.337819  471577 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 11:27:15.337843  471577 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 11:27:15.338560  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 11:27:15.365632  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 11:27:15.400347  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 11:27:15.430092  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 11:27:15.460841  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0812 11:27:15.485316  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 11:27:15.509932  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 11:27:15.534163  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/addons-800382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 11:27:15.558569  471577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 11:27:15.583263  471577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 11:27:15.600113  471577 ssh_runner.go:195] Run: openssl version
	I0812 11:27:15.606145  471577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 11:27:15.616818  471577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:15.621459  471577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:15.621532  471577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 11:27:15.627754  471577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
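The two steps above install the minikube CA into the guest's trust store: the PEM is symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 here) so TLS clients on the node trust the cluster CA. A Go sketch of that hash-and-symlink step, shelling out to openssl the same way the log does; the paths are taken from the log, it needs root, and it is only meant for a throwaway VM.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const ca = "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout -in <ca>` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ignore the error if the link does not exist yet
        if err := os.Symlink(ca, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link)
    }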
	I0812 11:27:15.638736  471577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 11:27:15.643273  471577 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 11:27:15.643347  471577 kubeadm.go:392] StartCluster: {Name:addons-800382 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-800382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:27:15.643439  471577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 11:27:15.643516  471577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 11:27:15.679800  471577 cri.go:89] found id: ""
	I0812 11:27:15.679948  471577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 11:27:15.691214  471577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 11:27:15.701293  471577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 11:27:15.711129  471577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 11:27:15.711153  471577 kubeadm.go:157] found existing configuration files:
	
	I0812 11:27:15.711207  471577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 11:27:15.720638  471577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 11:27:15.720691  471577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 11:27:15.730308  471577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 11:27:15.739524  471577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 11:27:15.739584  471577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 11:27:15.749563  471577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 11:27:15.758988  471577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 11:27:15.759054  471577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 11:27:15.768853  471577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 11:27:15.777877  471577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 11:27:15.777940  471577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 11:27:15.787414  471577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 11:27:15.848137  471577 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 11:27:15.848204  471577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 11:27:15.985577  471577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 11:27:15.985751  471577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 11:27:15.985874  471577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 11:27:16.195549  471577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 11:27:16.283216  471577 out.go:204]   - Generating certificates and keys ...
	I0812 11:27:16.283369  471577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 11:27:16.283473  471577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 11:27:16.579996  471577 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 11:27:16.679847  471577 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 11:27:16.780774  471577 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 11:27:16.913267  471577 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 11:27:17.266191  471577 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 11:27:17.266349  471577 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-800382 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0812 11:27:17.385975  471577 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 11:27:17.386124  471577 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-800382 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0812 11:27:17.543122  471577 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 11:27:17.594374  471577 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 11:27:17.727920  471577 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 11:27:17.727998  471577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 11:27:17.832535  471577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 11:27:18.094404  471577 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 11:27:18.396072  471577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 11:27:18.725429  471577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 11:27:18.815317  471577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 11:27:18.815898  471577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 11:27:18.818439  471577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 11:27:18.820402  471577 out.go:204]   - Booting up control plane ...
	I0812 11:27:18.820558  471577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 11:27:18.821420  471577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 11:27:18.822358  471577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 11:27:18.843083  471577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 11:27:18.843974  471577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 11:27:18.844054  471577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 11:27:18.965154  471577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 11:27:18.965304  471577 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 11:27:19.466776  471577 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.008384ms
	I0812 11:27:19.466904  471577 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 11:27:24.466813  471577 kubeadm.go:310] [api-check] The API server is healthy after 5.002237966s
	I0812 11:27:24.480894  471577 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 11:27:24.492720  471577 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 11:27:24.522109  471577 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 11:27:24.522354  471577 kubeadm.go:310] [mark-control-plane] Marking the node addons-800382 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 11:27:24.533437  471577 kubeadm.go:310] [bootstrap-token] Using token: hmjz79.5lq6jkexq29entwf
	I0812 11:27:24.534899  471577 out.go:204]   - Configuring RBAC rules ...
	I0812 11:27:24.535058  471577 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 11:27:24.541926  471577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 11:27:24.548823  471577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 11:27:24.553656  471577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 11:27:24.556743  471577 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 11:27:24.559810  471577 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 11:27:24.873320  471577 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 11:27:25.305770  471577 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 11:27:25.871406  471577 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 11:27:25.872352  471577 kubeadm.go:310] 
	I0812 11:27:25.872473  471577 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 11:27:25.872497  471577 kubeadm.go:310] 
	I0812 11:27:25.872588  471577 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 11:27:25.872600  471577 kubeadm.go:310] 
	I0812 11:27:25.872639  471577 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 11:27:25.872751  471577 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 11:27:25.872843  471577 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 11:27:25.872855  471577 kubeadm.go:310] 
	I0812 11:27:25.872926  471577 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 11:27:25.872936  471577 kubeadm.go:310] 
	I0812 11:27:25.872999  471577 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 11:27:25.873008  471577 kubeadm.go:310] 
	I0812 11:27:25.873109  471577 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 11:27:25.873222  471577 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 11:27:25.873302  471577 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 11:27:25.873308  471577 kubeadm.go:310] 
	I0812 11:27:25.873406  471577 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 11:27:25.873488  471577 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 11:27:25.873495  471577 kubeadm.go:310] 
	I0812 11:27:25.873564  471577 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hmjz79.5lq6jkexq29entwf \
	I0812 11:27:25.873690  471577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f \
	I0812 11:27:25.873711  471577 kubeadm.go:310] 	--control-plane 
	I0812 11:27:25.873714  471577 kubeadm.go:310] 
	I0812 11:27:25.873794  471577 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 11:27:25.873807  471577 kubeadm.go:310] 
	I0812 11:27:25.873911  471577 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hmjz79.5lq6jkexq29entwf \
	I0812 11:27:25.874054  471577 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f 
	I0812 11:27:25.875073  471577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 11:27:25.875097  471577 cni.go:84] Creating CNI manager for ""
	I0812 11:27:25.875104  471577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:27:25.877020  471577 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 11:27:25.878416  471577 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 11:27:25.888826  471577 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 11:27:25.909043  471577 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 11:27:25.909148  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:25.909180  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-800382 minikube.k8s.io/updated_at=2024_08_12T11_27_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=addons-800382 minikube.k8s.io/primary=true
	I0812 11:27:25.932578  471577 ops.go:34] apiserver oom_adj: -16
	I0812 11:27:26.037626  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:26.538273  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:27.038352  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:27.537653  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:28.038633  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:28.538351  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:29.037728  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:29.538612  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:30.038002  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:30.537932  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:31.037895  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:31.537746  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:32.038064  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:32.537702  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:33.037775  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:33.538514  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:34.038531  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:34.537867  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:35.038664  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:35.537708  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:36.038696  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:36.538049  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:37.038050  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:37.538045  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:38.038200  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:38.538348  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:39.037770  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:39.538011  471577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 11:27:39.651912  471577 kubeadm.go:1113] duration metric: took 13.742857836s to wait for elevateKubeSystemPrivileges
	I0812 11:27:39.651953  471577 kubeadm.go:394] duration metric: took 24.0086131s to StartCluster
	I0812 11:27:39.651975  471577 settings.go:142] acquiring lock: {Name:mke9ed38a916e17fe99baccde568c442d70df1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:39.652128  471577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 11:27:39.652495  471577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:27:39.652731  471577 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 11:27:39.652766  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 11:27:39.652899  471577 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0812 11:27:39.652987  471577 config.go:182] Loaded profile config "addons-800382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:27:39.653033  471577 addons.go:69] Setting inspektor-gadget=true in profile "addons-800382"
	I0812 11:27:39.653050  471577 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-800382"
	I0812 11:27:39.653049  471577 addons.go:69] Setting helm-tiller=true in profile "addons-800382"
	I0812 11:27:39.653055  471577 addons.go:69] Setting gcp-auth=true in profile "addons-800382"
	I0812 11:27:39.653094  471577 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-800382"
	I0812 11:27:39.653033  471577 addons.go:69] Setting yakd=true in profile "addons-800382"
	I0812 11:27:39.653088  471577 addons.go:69] Setting ingress-dns=true in profile "addons-800382"
	I0812 11:27:39.653112  471577 addons.go:69] Setting registry=true in profile "addons-800382"
	I0812 11:27:39.653115  471577 addons.go:234] Setting addon yakd=true in "addons-800382"
	I0812 11:27:39.653111  471577 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-800382"
	I0812 11:27:39.653126  471577 addons.go:234] Setting addon ingress-dns=true in "addons-800382"
	I0812 11:27:39.653134  471577 addons.go:234] Setting addon registry=true in "addons-800382"
	I0812 11:27:39.653138  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653153  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653155  471577 mustload.go:65] Loading cluster: addons-800382
	I0812 11:27:39.653160  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653177  471577 addons.go:69] Setting cloud-spanner=true in profile "addons-800382"
	I0812 11:27:39.653045  471577 addons.go:69] Setting metrics-server=true in profile "addons-800382"
	I0812 11:27:39.653196  471577 addons.go:234] Setting addon cloud-spanner=true in "addons-800382"
	I0812 11:27:39.653204  471577 addons.go:234] Setting addon metrics-server=true in "addons-800382"
	I0812 11:27:39.653226  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653232  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653165  471577 addons.go:69] Setting storage-provisioner=true in profile "addons-800382"
	I0812 11:27:39.653302  471577 addons.go:234] Setting addon storage-provisioner=true in "addons-800382"
	I0812 11:27:39.653322  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653343  471577 config.go:182] Loaded profile config "addons-800382": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 11:27:39.653623  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653645  471577 addons.go:69] Setting volcano=true in profile "addons-800382"
	I0812 11:27:39.653661  471577 addons.go:69] Setting default-storageclass=true in profile "addons-800382"
	I0812 11:27:39.653678  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653677  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653173  471577 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-800382"
	I0812 11:27:39.653689  471577 addons.go:69] Setting volumesnapshots=true in profile "addons-800382"
	I0812 11:27:39.653716  471577 addons.go:234] Setting addon volumesnapshots=true in "addons-800382"
	I0812 11:27:39.653728  471577 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-800382"
	I0812 11:27:39.653732  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653744  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653095  471577 addons.go:234] Setting addon inspektor-gadget=true in "addons-800382"
	I0812 11:27:39.653748  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653150  471577 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-800382"
	I0812 11:27:39.653649  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653685  471577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-800382"
	I0812 11:27:39.653776  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653178  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653822  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653627  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653868  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653917  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.653942  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653921  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.653680  471577 addons.go:234] Setting addon volcano=true in "addons-800382"
	I0812 11:27:39.653647  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654095  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654105  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.654111  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.654071  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654165  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.654178  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654189  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.653043  471577 addons.go:69] Setting ingress=true in profile "addons-800382"
	I0812 11:27:39.654251  471577 addons.go:234] Setting addon ingress=true in "addons-800382"
	I0812 11:27:39.654280  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.654288  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.654302  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654306  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.654325  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.654340  471577 addons.go:234] Setting addon helm-tiller=true in "addons-800382"
	I0812 11:27:39.654370  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.654529  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.654550  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.661805  471577 out.go:177] * Verifying Kubernetes components...
	I0812 11:27:39.663853  471577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 11:27:39.675753  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0812 11:27:39.676003  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0812 11:27:39.676011  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
	I0812 11:27:39.676402  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.676506  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.676542  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.676657  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0812 11:27:39.677150  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.677158  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.677172  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.677178  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.677275  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
	I0812 11:27:39.677343  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.677367  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.677717  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.677738  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.677764  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.677823  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.677848  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.678327  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.678366  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.678391  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.678422  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.678594  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.678607  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.679073  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.681489  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.681520  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.681528  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.681552  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.681819  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.681868  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.681889  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.682069  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.682101  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.682225  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.682245  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.691619  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.691784  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.691817  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.692521  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.693175  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.693212  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.694763  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0812 11:27:39.695413  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.696188  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.696205  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.696548  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.697205  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.697230  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.702921  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0812 11:27:39.703842  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.704492  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.704511  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.704946  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.705539  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.705582  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.717370  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0812 11:27:39.718114  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.718861  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.718883  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.719300  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.719913  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.719957  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.721895  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
	I0812 11:27:39.722463  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.722943  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.722961  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.723384  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.723943  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.723986  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.724179  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0812 11:27:39.724704  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.724747  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42875
	I0812 11:27:39.725274  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.725295  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.725730  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.725791  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.726445  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.726486  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.726807  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.726827  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.727130  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0812 11:27:39.727223  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.727412  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.730416  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0812 11:27:39.730441  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.730532  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.730879  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.730965  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0812 11:27:39.731223  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.731236  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.731345  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.731364  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.731887  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.732127  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.732624  471577 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0812 11:27:39.733020  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.733693  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.733735  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.734045  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.734262  471577 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 11:27:39.734281  471577 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0812 11:27:39.734304  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.735529  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0812 11:27:39.735804  471577 out.go:177]   - Using image docker.io/registry:2.8.3
	I0812 11:27:39.736204  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.736847  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.736865  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.737418  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.737426  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.737648  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.738050  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.738334  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.738373  471577 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0812 11:27:39.738519  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.738721  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.738953  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.739470  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.739501  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.739752  471577 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0812 11:27:39.739770  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0812 11:27:39.739794  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.740595  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.741254  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0812 11:27:39.742493  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0812 11:27:39.743782  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.744352  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.744454  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.744720  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.744893  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.745078  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.745271  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.745767  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.745784  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.745869  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0812 11:27:39.746332  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.746649  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.748366  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0812 11:27:39.748578  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.748991  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.749042  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.749824  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0812 11:27:39.750323  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.750655  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0812 11:27:39.750945  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.750964  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.751078  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39391
	I0812 11:27:39.751348  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.752079  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.752124  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.752845  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.753200  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0812 11:27:39.753330  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44869
	I0812 11:27:39.753781  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.753906  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.753925  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.754315  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.754332  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.754721  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.754756  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.754926  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.754985  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.755755  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0812 11:27:39.756938  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.757307  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:39.757323  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:39.757513  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:39.757534  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:39.757544  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:39.757551  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:39.757782  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:39.757813  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:39.757821  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	W0812 11:27:39.757899  471577 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0812 11:27:39.758500  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0812 11:27:39.758509  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36359
	I0812 11:27:39.759273  471577 addons.go:234] Setting addon default-storageclass=true in "addons-800382"
	I0812 11:27:39.759324  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.759518  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.759664  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 11:27:39.759690  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 11:27:39.759722  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.759786  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.759728  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.760124  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.760141  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.760570  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.760796  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.761509  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0812 11:27:39.761523  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0812 11:27:39.761858  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.761924  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.762542  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.762558  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.762646  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.762665  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.763257  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.763384  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.763467  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.763562  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.763607  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.763868  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.764029  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.764052  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.764225  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.764353  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.764748  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.765009  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.765444  471577 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0812 11:27:39.765683  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.766095  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.766800  471577 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0812 11:27:39.766819  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0812 11:27:39.766837  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.767971  471577 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0812 11:27:39.768057  471577 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0812 11:27:39.769449  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 11:27:39.769471  471577 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 11:27:39.769490  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.769562  471577 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 11:27:39.769581  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0812 11:27:39.769598  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.770587  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.770813  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.770847  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.771033  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.771200  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.771345  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.771479  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.774065  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.775144  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.775431  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.775456  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.775697  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.775732  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.775921  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.775998  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.776168  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.776224  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.776312  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.776362  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.776465  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.776747  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.778919  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I0812 11:27:39.779300  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.779587  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0812 11:27:39.779832  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.779844  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.780036  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.780177  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.780345  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.780470  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.780484  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.780985  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.781706  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.781746  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.781956  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0812 11:27:39.782111  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.782547  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.782987  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.783003  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.783079  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0812 11:27:39.783419  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.783481  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.783692  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.783981  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.783998  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.784180  471577 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0812 11:27:39.784376  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.784593  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.785554  471577 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 11:27:39.785574  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0812 11:27:39.785591  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.785881  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.786566  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.788041  471577 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0812 11:27:39.788098  471577 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0812 11:27:39.788745  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.789320  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.789389  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.789533  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.789668  471577 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 11:27:39.789693  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0812 11:27:39.789712  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.789761  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.789770  471577 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0812 11:27:39.789778  471577 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0812 11:27:39.789798  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.789934  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.790066  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.793038  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I0812 11:27:39.793556  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.793822  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.793914  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.794142  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.794162  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.794231  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.794249  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.794461  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.794643  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.794697  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.795072  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.795131  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.795302  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.795641  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0812 11:27:39.795936  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.795959  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.796380  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.796762  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.796942  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.797095  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.797236  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.797510  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.798176  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.798194  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.799542  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0812 11:27:39.799885  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.799957  471577 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0812 11:27:39.800177  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0812 11:27:39.800605  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.800820  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.800837  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.800901  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.801241  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.801339  471577 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0812 11:27:39.801354  471577 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0812 11:27:39.801370  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.802241  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.802259  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.802426  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.802645  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.802843  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.803431  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.803509  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0812 11:27:39.803903  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.804002  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.805701  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.805968  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.805985  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.806055  471577 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-800382"
	I0812 11:27:39.806096  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:39.806297  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.806325  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.806520  471577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 11:27:39.806594  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.806623  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.806640  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.806701  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.806880  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.806903  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.807074  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.807216  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.808041  471577 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:27:39.808062  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 11:27:39.808108  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.808744  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.809259  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0812 11:27:39.809729  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.810447  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.810457  471577 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 11:27:39.810465  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.810837  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.811509  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.811546  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.811703  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.812208  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.812237  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.812385  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.812643  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.812805  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.812947  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.813143  471577 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0812 11:27:39.814570  471577 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 11:27:39.816144  471577 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 11:27:39.816163  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0812 11:27:39.816182  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.819486  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.819819  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.819844  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.820017  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.820244  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.820432  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.820592  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.826189  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0812 11:27:39.826726  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.827249  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.827272  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.827705  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.828283  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:39.828311  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:39.828822  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0812 11:27:39.853688  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.854281  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.854310  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.854679  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.854933  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.856721  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.856985  471577 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 11:27:39.857005  471577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 11:27:39.857026  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.860313  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.860850  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.860882  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.861013  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.861211  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.861404  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.861576  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:39.871523  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0812 11:27:39.872113  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:39.872651  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:39.872681  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:39.873044  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:39.873283  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:39.875107  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:39.877266  471577 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0812 11:27:39.878728  471577 out.go:177]   - Using image docker.io/busybox:stable
	I0812 11:27:39.880180  471577 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 11:27:39.880197  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0812 11:27:39.880220  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:39.883301  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.883809  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:39.883840  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:39.883980  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:39.884171  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:39.884355  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:39.884517  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:40.163369  471577 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0812 11:27:40.163398  471577 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0812 11:27:40.215557  471577 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0812 11:27:40.215580  471577 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0812 11:27:40.235455  471577 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0812 11:27:40.235482  471577 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0812 11:27:40.244605  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 11:27:40.249792  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0812 11:27:40.277238  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0812 11:27:40.314833  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0812 11:27:40.337914  471577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 11:27:40.337942  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0812 11:27:40.347694  471577 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0812 11:27:40.347737  471577 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0812 11:27:40.392542  471577 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0812 11:27:40.392587  471577 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0812 11:27:40.408629  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0812 11:27:40.438386  471577 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0812 11:27:40.438425  471577 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0812 11:27:40.441042  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 11:27:40.469962  471577 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0812 11:27:40.470003  471577 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0812 11:27:40.484229  471577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 11:27:40.484721  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 11:27:40.490354  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0812 11:27:40.490380  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0812 11:27:40.504083  471577 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0812 11:27:40.504118  471577 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0812 11:27:40.510205  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0812 11:27:40.533951  471577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 11:27:40.533980  471577 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0812 11:27:40.601336  471577 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 11:27:40.601372  471577 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0812 11:27:40.640807  471577 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0812 11:27:40.640830  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0812 11:27:40.644855  471577 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0812 11:27:40.644875  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0812 11:27:40.666782  471577 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0812 11:27:40.666814  471577 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0812 11:27:40.687305  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0812 11:27:40.687335  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0812 11:27:40.793908  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 11:27:40.820045  471577 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0812 11:27:40.820075  471577 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0812 11:27:40.825449  471577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:27:40.825477  471577 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0812 11:27:40.865923  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0812 11:27:40.865956  471577 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0812 11:27:40.885969  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0812 11:27:40.898767  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0812 11:27:40.985218  471577 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0812 11:27:40.985249  471577 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0812 11:27:40.989726  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0812 11:27:40.989764  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0812 11:27:41.043455  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 11:27:41.095081  471577 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 11:27:41.095114  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0812 11:27:41.254700  471577 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0812 11:27:41.254740  471577 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0812 11:27:41.337117  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0812 11:27:41.337151  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0812 11:27:41.457605  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 11:27:41.596088  471577 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0812 11:27:41.596118  471577 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0812 11:27:41.739016  471577 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0812 11:27:41.739054  471577 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0812 11:27:41.806219  471577 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0812 11:27:41.806248  471577 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0812 11:27:42.053814  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0812 11:27:42.053844  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0812 11:27:42.115736  471577 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 11:27:42.115782  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0812 11:27:42.475589  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0812 11:27:42.475623  471577 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0812 11:27:42.489827  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0812 11:27:42.838968  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0812 11:27:42.838995  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0812 11:27:43.367852  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0812 11:27:43.367908  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0812 11:27:43.766970  471577 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 11:27:43.767000  471577 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0812 11:27:44.104884  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 11:27:45.347465  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.102807804s)
	I0812 11:27:45.347538  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:45.347557  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:45.347935  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:45.347955  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:45.347968  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:45.347975  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:45.347979  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:45.348185  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:45.348201  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:46.813271  471577 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0812 11:27:46.813329  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:46.817347  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:46.817819  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:46.817872  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:46.818152  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:46.818385  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:46.818590  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:46.818762  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:47.120815  471577 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0812 11:27:47.170782  471577 addons.go:234] Setting addon gcp-auth=true in "addons-800382"
	I0812 11:27:47.170866  471577 host.go:66] Checking if "addons-800382" exists ...
	I0812 11:27:47.171359  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:47.171397  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:47.187717  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0812 11:27:47.188232  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:47.188815  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:47.188846  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:47.189291  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:47.189882  471577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 11:27:47.189915  471577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 11:27:47.206694  471577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I0812 11:27:47.207229  471577 main.go:141] libmachine: () Calling .GetVersion
	I0812 11:27:47.207830  471577 main.go:141] libmachine: Using API Version  1
	I0812 11:27:47.207863  471577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 11:27:47.208335  471577 main.go:141] libmachine: () Calling .GetMachineName
	I0812 11:27:47.208593  471577 main.go:141] libmachine: (addons-800382) Calling .GetState
	I0812 11:27:47.210293  471577 main.go:141] libmachine: (addons-800382) Calling .DriverName
	I0812 11:27:47.210549  471577 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0812 11:27:47.210582  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHHostname
	I0812 11:27:47.213862  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:47.214306  471577 main.go:141] libmachine: (addons-800382) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:1b:29", ip: ""} in network mk-addons-800382: {Iface:virbr1 ExpiryTime:2024-08-12 12:27:02 +0000 UTC Type:0 Mac:52:54:00:6a:1b:29 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-800382 Clientid:01:52:54:00:6a:1b:29}
	I0812 11:27:47.214342  471577 main.go:141] libmachine: (addons-800382) DBG | domain addons-800382 has defined IP address 192.168.39.168 and MAC address 52:54:00:6a:1b:29 in network mk-addons-800382
	I0812 11:27:47.214555  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHPort
	I0812 11:27:47.214769  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHKeyPath
	I0812 11:27:47.215054  471577 main.go:141] libmachine: (addons-800382) Calling .GetSSHUsername
	I0812 11:27:47.215234  471577 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/addons-800382/id_rsa Username:docker}
	I0812 11:27:48.416436  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.166596159s)
	I0812 11:27:48.416497  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416513  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.416520  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.139244682s)
	I0812 11:27:48.416587  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416604  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.416634  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.101768833s)
	I0812 11:27:48.416678  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416697  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.416739  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.008074343s)
	I0812 11:27:48.416769  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416787  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.416803  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.975731173s)
	I0812 11:27:48.416836  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416849  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.416873  471577 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.932619386s)
	I0812 11:27:48.416908  471577 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.932154454s)
	I0812 11:27:48.416929  471577 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
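	(Note on the CoreDNS step that just completed: the long sed pipeline above only splices a hosts block into the coredns ConfigMap so pods can resolve host.minikube.internal to the host IP. Reconstructed from the sed expression itself, not copied from the live cluster, the injected fragment is roughly:)
	kubectl -n kube-system get configmap coredns -o yaml   # Corefile now carries:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }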
	I0812 11:27:48.416940  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.906712138s)
	I0812 11:27:48.416957  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.416966  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417035  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.623099233s)
	I0812 11:27:48.417055  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417056  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.417064  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417114  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.417128  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.417137  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417147  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417154  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.417179  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.417187  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.417188  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.51838547s)
	I0812 11:27:48.417196  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417206  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417214  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417224  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417146  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.53114093s)
	I0812 11:27:48.417264  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417277  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417347  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.373858318s)
	I0812 11:27:48.417353  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.417369  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417379  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417388  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.417398  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.417407  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417441  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417498  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.959854591s)
	W0812 11:27:48.417546  471577 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0812 11:27:48.417571  471577 retry.go:31] will retry after 149.323174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
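	(Note: the "no matches for kind VolumeSnapshotClass" failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, so the API server has no REST mapping for it yet. minikube handles this itself by retrying and, a moment later, re-applying with --force, as the log shows below. Done by hand, the equivalent recovery would be a sketch along these lines, assuming the standard snapshot.storage.k8s.io CRD names and the addon's file paths:)
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml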
	I0812 11:27:48.417658  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.927800482s)
	I0812 11:27:48.417678  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417687  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417760  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.417787  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.417794  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.417801  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417808  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.417865  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.417887  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.417893  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.417900  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.417907  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.418009  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.418040  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418048  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418057  471577 addons.go:475] Verifying addon ingress=true in "addons-800382"
	I0812 11:27:48.418155  471577 node_ready.go:35] waiting up to 6m0s for node "addons-800382" to be "Ready" ...
	I0812 11:27:48.418440  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.418465  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.418496  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418505  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418518  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.418526  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.418528  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418541  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418762  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418774  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418784  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.418792  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.418842  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.418867  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418874  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418882  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.418890  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.418931  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.418952  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.418959  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.418968  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.418977  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.419028  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.419066  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.419073  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.419082  471577 addons.go:475] Verifying addon metrics-server=true in "addons-800382"
	I0812 11:27:48.419345  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.419381  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.419389  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.419400  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.419408  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.422395  471577 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0812 11:27:48.422411  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.422657  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.422687  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.422694  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.422809  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.422818  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.422834  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.422845  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.422854  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.422970  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.423022  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.423036  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.423048  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.423057  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.423470  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.423504  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.423511  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.423774  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.423806  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.423831  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.423838  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.423850  471577 addons.go:475] Verifying addon registry=true in "addons-800382"
	I0812 11:27:48.423926  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.423938  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.424015  471577 out.go:177] * Verifying ingress addon...
	I0812 11:27:48.424061  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.424098  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.424855  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.426154  471577 out.go:177] * Verifying registry addon...
	I0812 11:27:48.426176  471577 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-800382 service yakd-dashboard -n yakd-dashboard
	
	I0812 11:27:48.427034  471577 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0812 11:27:48.428060  471577 node_ready.go:49] node "addons-800382" has status "Ready":"True"
	I0812 11:27:48.428082  471577 node_ready.go:38] duration metric: took 9.900192ms for node "addons-800382" to be "Ready" ...
	I0812 11:27:48.428094  471577 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:27:48.428569  471577 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0812 11:27:48.443332  471577 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0812 11:27:48.443359  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:48.466879  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.466901  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.467199  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:48.467252  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.467394  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	W0812 11:27:48.467534  471577 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
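	(Note: this 'default-storageclass' warning is an optimistic-concurrency conflict, not a hard failure: the PATCH that marks local-path as non-default landed on a stale resourceVersion, most likely because the storage-provisioner-rancher addon was creating or updating that StorageClass at the same time. The same change, made manually once things settle, is just an annotation patch; the class names below are the ones from this run and the commands are illustrative:)
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'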
	I0812 11:27:48.483139  471577 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c62l9" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.483881  471577 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0812 11:27:48.483900  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:48.489327  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:48.489352  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:48.489659  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:48.489677  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:48.522729  471577 pod_ready.go:92] pod "coredns-7db6d8ff4d-c62l9" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:48.522760  471577 pod_ready.go:81] duration metric: took 39.587057ms for pod "coredns-7db6d8ff4d-c62l9" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.522775  471577 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rbfjb" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.567598  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 11:27:48.581537  471577 pod_ready.go:92] pod "coredns-7db6d8ff4d-rbfjb" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:48.581565  471577 pod_ready.go:81] duration metric: took 58.781331ms for pod "coredns-7db6d8ff4d-rbfjb" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.581579  471577 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.596587  471577 pod_ready.go:92] pod "etcd-addons-800382" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:48.596619  471577 pod_ready.go:81] duration metric: took 15.031325ms for pod "etcd-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.596633  471577 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.651574  471577 pod_ready.go:92] pod "kube-apiserver-addons-800382" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:48.651603  471577 pod_ready.go:81] duration metric: took 54.961973ms for pod "kube-apiserver-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.651618  471577 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.823282  471577 pod_ready.go:92] pod "kube-controller-manager-addons-800382" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:48.823320  471577 pod_ready.go:81] duration metric: took 171.692998ms for pod "kube-controller-manager-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.823336  471577 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4c827" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:48.923491  471577 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-800382" context rescaled to 1 replicas
	I0812 11:27:48.942530  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:48.947966  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:49.222241  471577 pod_ready.go:92] pod "kube-proxy-4c827" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:49.222273  471577 pod_ready.go:81] duration metric: took 398.92663ms for pod "kube-proxy-4c827" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:49.222287  471577 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:49.451841  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:49.463650  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:49.625698  471577 pod_ready.go:92] pod "kube-scheduler-addons-800382" in "kube-system" namespace has status "Ready":"True"
	I0812 11:27:49.625732  471577 pod_ready.go:81] duration metric: took 403.436997ms for pod "kube-scheduler-addons-800382" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:49.625748  471577 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace to be "Ready" ...
	I0812 11:27:49.946796  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:49.962596  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:50.495578  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:50.496235  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:50.556182  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.451224338s)
	I0812 11:27:50.556209  471577 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.345627526s)
	I0812 11:27:50.556252  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:50.556268  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:50.556659  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:50.556685  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:50.556706  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:50.556716  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:50.557116  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:50.557129  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:50.557146  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:50.557160  471577 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-800382"
	I0812 11:27:50.558068  471577 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0812 11:27:50.559100  471577 out.go:177] * Verifying csi-hostpath-driver addon...
	I0812 11:27:50.560708  471577 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0812 11:27:50.561779  471577 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0812 11:27:50.561952  471577 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0812 11:27:50.561973  471577 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0812 11:27:50.608738  471577 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0812 11:27:50.608764  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:50.717992  471577 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0812 11:27:50.718028  471577 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0812 11:27:50.775205  471577 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 11:27:50.775247  471577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0812 11:27:50.788938  471577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.221288011s)
	I0812 11:27:50.789011  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:50.789027  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:50.789339  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:50.789391  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:50.789410  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:50.789425  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:50.789433  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:50.789674  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:50.789688  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:50.843122  471577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0812 11:27:50.931553  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:50.936588  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:51.068133  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:51.442928  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:51.443186  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:51.572571  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:51.616442  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:51.616479  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:51.616884  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:51.616938  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:51.616949  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:51.616958  471577 main.go:141] libmachine: Making call to close driver server
	I0812 11:27:51.616965  471577 main.go:141] libmachine: (addons-800382) Calling .Close
	I0812 11:27:51.617303  471577 main.go:141] libmachine: (addons-800382) DBG | Closing plugin on server side
	I0812 11:27:51.617334  471577 main.go:141] libmachine: Successfully made call to close driver server
	I0812 11:27:51.617353  471577 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 11:27:51.619466  471577 addons.go:475] Verifying addon gcp-auth=true in "addons-800382"
	I0812 11:27:51.621417  471577 out.go:177] * Verifying gcp-auth addon...
	I0812 11:27:51.623665  471577 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0812 11:27:51.650407  471577 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0812 11:27:51.650440  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:51.661623  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:27:51.931722  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:51.933640  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:52.068925  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:52.145074  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:52.433589  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:52.434793  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:52.568061  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:52.629660  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:52.932834  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:52.933818  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:53.069055  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:53.127887  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:53.431716  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:53.434107  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:53.567783  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:53.627951  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:53.931702  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:53.933625  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:54.068687  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:54.127538  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:54.133786  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:27:54.432122  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:54.433886  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:54.568021  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:54.628158  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:54.931762  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:54.933138  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:55.067672  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:55.129703  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:55.431723  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:55.434624  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:55.567791  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:55.628899  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:55.932593  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:55.935084  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:56.146911  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:56.152333  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:56.159092  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:27:56.432515  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:56.433708  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:56.567635  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:56.627677  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:56.934639  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:56.942789  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:57.069631  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:57.129696  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:57.434098  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:57.434842  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:57.567218  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:57.628659  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:57.934026  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:57.934800  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:58.067339  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:58.128773  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:58.432294  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:58.434084  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:58.568126  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:58.628342  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:58.635760  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:27:58.933452  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:58.933653  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:59.068004  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:59.127888  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:59.431591  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:59.434610  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:27:59.567465  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:27:59.628644  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:27:59.931633  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:27:59.933094  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:00.068089  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:00.129216  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:00.431608  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:00.434618  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:00.568061  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:00.629368  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:00.931908  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:00.934805  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:01.068102  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:01.127311  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:01.145421  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:01.431863  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:01.435534  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:01.569259  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:01.627176  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:01.931771  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:01.934747  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:02.068238  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:02.130227  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:02.433849  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:02.435518  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:02.567061  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:02.627882  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:02.931814  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:02.933666  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:03.067467  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:03.131149  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:03.432975  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:03.434241  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:03.567806  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:03.628250  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:03.632079  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:03.938702  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:03.939396  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:04.326743  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:04.327866  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:04.432675  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:04.435162  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:04.566873  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:04.626875  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:04.931588  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:04.933166  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:05.067005  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:05.127928  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:05.431842  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:05.434362  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:05.567552  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:05.627945  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:05.931797  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:05.933925  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:06.071238  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:06.127836  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:06.137022  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:06.777348  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:06.784935  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:06.787221  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:06.788697  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:06.931284  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:06.943564  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:07.067056  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:07.129042  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:07.431808  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:07.434577  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:07.567320  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:07.629238  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:07.931902  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:07.933055  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:08.067730  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:08.127451  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:08.431598  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:08.433014  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:08.568004  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:08.628351  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:08.631370  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:08.931677  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:08.932710  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:09.067366  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:09.129369  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:09.432336  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:09.433679  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:09.567694  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:09.627426  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:09.931829  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:09.933412  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:10.066643  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:10.128050  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:10.431686  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:10.433376  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:10.567487  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:10.633374  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:10.633777  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:10.931933  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:10.933362  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:11.067382  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:11.128539  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:11.433879  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:11.435181  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:11.567507  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:11.628450  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:11.931661  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:11.934690  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:12.067173  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:12.127916  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:12.437768  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:12.437881  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:12.568259  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:12.628150  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:12.634527  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:12.931794  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:12.933696  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:13.067562  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:13.127138  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:13.432365  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:13.436029  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:13.568014  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:13.627866  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:13.932332  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:13.934031  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:14.069798  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:14.127950  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:14.434838  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:14.438671  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:14.567969  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:14.628020  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:14.933453  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:14.933659  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:15.067964  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:15.139310  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:15.150175  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:15.431305  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:15.433490  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:15.568568  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:15.627858  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:15.932196  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:15.933961  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:16.067962  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:16.127658  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:16.433691  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:16.433693  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:16.568048  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:16.627405  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:16.934136  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:16.934638  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:17.067932  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:17.135625  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:17.432157  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:17.434771  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:17.568608  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:17.629154  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:17.631498  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:17.932504  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:17.933591  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:18.067516  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:18.129466  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:18.431898  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:18.433500  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:18.567175  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:18.628159  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:18.933866  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:18.933902  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:19.067490  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:19.127435  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:19.432050  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:19.435353  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:19.568089  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:19.627353  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:19.631529  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:19.934093  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:19.934381  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:20.068340  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:20.129656  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:20.432208  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:20.432916  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:20.567884  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:20.628240  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:20.933534  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:20.933590  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:21.069770  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:21.137292  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:21.432305  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:21.433847  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:21.567659  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:21.628201  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:21.631782  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:21.932469  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:21.933359  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:22.067284  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:22.128255  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:22.432776  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:22.434554  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:22.568150  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:22.627149  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:22.932574  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:22.935095  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:23.068210  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:23.127864  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:23.432043  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:23.433598  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:23.567622  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:23.627400  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:23.934428  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:23.934494  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:24.067479  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:24.128129  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:24.138262  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:24.697849  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:24.698878  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:24.699581  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:24.699854  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:24.934058  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:24.935720  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:25.067184  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:25.128334  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:25.431756  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:25.433531  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:25.567134  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:25.628239  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:25.931471  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:25.939320  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:26.071273  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:26.130518  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:26.141254  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:26.434241  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:26.435324  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:26.567213  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:26.630677  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:26.931128  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:26.933791  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:27.068020  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:27.133464  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:27.432409  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:27.433590  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:27.583673  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:27.627484  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:27.931498  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:27.938313  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:28.066880  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:28.129457  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:28.432626  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:28.434774  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:28.568193  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:28.627651  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:28.631654  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:28.933826  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:28.943342  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:29.071554  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:29.130430  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:29.431504  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:29.434392  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:29.570677  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:29.627213  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:29.933726  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:29.937035  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:30.068285  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:30.129229  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:30.432221  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:30.433160  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:30.569296  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:30.628013  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:30.932952  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:30.933543  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:31.067351  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:31.127948  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:31.133196  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:31.431118  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:31.434154  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:31.569040  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:31.628333  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:31.932015  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:31.934501  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:32.067327  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:32.128509  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:32.431960  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:32.434471  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:32.571261  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:32.628727  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:32.933821  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:32.936544  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:33.068208  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:33.129946  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:33.433717  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:33.436202  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:33.568011  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:33.629050  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:33.632393  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:33.947519  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:33.948284  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:34.068163  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:34.128208  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:34.833046  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:34.835217  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:34.835478  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:34.837823  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:34.932533  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:34.933764  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:35.071740  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:35.129739  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:35.431407  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:35.449666  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:35.567459  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:35.631369  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:35.633856  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:35.933017  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:35.936459  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:36.067172  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:36.130580  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:36.435529  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:36.437226  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:36.572350  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:36.628172  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:36.931918  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:36.933517  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:37.067365  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:37.129319  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:37.433107  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:37.435172  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 11:28:37.570651  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:37.627360  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:37.932447  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:37.933328  471577 kapi.go:107] duration metric: took 49.504755924s to wait for kubernetes.io/minikube-addons=registry ...
	I0812 11:28:38.069360  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:38.128877  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:38.132848  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:38.436627  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:38.567651  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:38.627216  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:38.931932  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:39.071677  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:39.130659  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:39.431868  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:39.567342  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:39.627853  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:39.931624  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:40.069275  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:40.128804  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:40.432479  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:40.567206  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:40.628214  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:40.632171  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:40.931110  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:41.068531  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:41.128875  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:41.432750  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:41.568282  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:41.627421  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:41.932574  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:42.068662  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:42.130431  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:42.431288  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:42.568406  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:42.628181  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:42.633347  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:42.931002  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:43.068202  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:43.128730  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:43.431859  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:43.567428  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:43.628169  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:43.933578  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:44.068859  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:44.131124  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:44.431683  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:44.567821  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:44.629498  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:44.932925  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:45.069419  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:45.141347  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:45.145515  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:45.433842  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:45.567482  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:45.634899  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:45.930861  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:46.067708  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:46.128099  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:46.431763  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:46.567212  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:46.627193  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:46.931663  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:47.067351  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:47.130281  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:47.431331  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:47.568405  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:47.628036  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:47.633784  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:47.931859  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:48.070331  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:48.130219  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:48.431205  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:48.567941  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:48.627252  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:48.930876  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:49.077678  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:49.132203  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:49.431391  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:49.568011  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:49.627535  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:49.934958  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:50.069792  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:50.127116  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:50.133508  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:50.437037  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:50.568050  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:50.627836  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:50.986372  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:51.079390  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:51.133124  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:51.431565  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:51.567674  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:51.628399  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:51.931408  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:52.068349  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:52.128962  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:52.134255  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:52.432346  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:52.568273  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:52.633509  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:52.933846  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:53.067086  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:53.140633  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:53.431233  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:53.568227  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:53.628559  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:53.931883  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:54.067397  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:54.128249  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:54.433598  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:54.568055  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:54.627809  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:54.632547  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:54.932031  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:55.068219  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:55.141223  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:55.431749  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:55.570796  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:55.628264  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:55.931936  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:56.066970  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:56.128790  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:56.432470  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:56.573582  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:56.627918  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:56.633282  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:56.949301  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:57.076353  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:57.130203  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:57.433378  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:57.568473  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:57.628122  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:57.932360  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:58.068267  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:58.130210  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:58.433132  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:58.569309  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:58.635462  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:58.636583  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:28:58.931938  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:59.068617  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:59.131571  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:59.433117  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:28:59.568432  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:28:59.631918  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:28:59.931339  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:00.068712  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:00.127988  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:00.432236  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:00.713578  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:00.717211  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:00.720314  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:00.931451  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:01.070004  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:01.127932  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:01.431604  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:01.567364  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:01.628018  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:01.932643  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:02.070734  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:02.128066  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:02.432078  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:02.567532  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:02.626927  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:02.932457  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:03.067642  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:03.131063  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:03.141947  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:03.433173  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:03.568734  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:03.632592  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:03.931960  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:04.066839  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:04.136591  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:04.431314  471577 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 11:29:04.569910  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:04.628820  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:04.932424  471577 kapi.go:107] duration metric: took 1m16.505385931s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0812 11:29:05.067086  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:05.128929  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:05.570238  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:05.630629  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:05.640638  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:06.068177  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:06.127434  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:06.568357  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:06.628296  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:07.067459  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:07.129463  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:07.567930  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:07.627470  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:08.067140  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:08.131370  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:08.136449  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:08.782420  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:08.785291  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:09.068769  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:09.130766  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0812 11:29:09.577043  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:09.628595  471577 kapi.go:107] duration metric: took 1m18.004927895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0812 11:29:09.630340  471577 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-800382 cluster.
	I0812 11:29:09.631691  471577 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0812 11:29:09.633040  471577 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0812 11:29:10.067167  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:10.567297  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:10.631622  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:11.068050  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:11.567360  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:12.066921  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:12.568245  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:12.642717  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:13.066761  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:13.568108  471577 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 11:29:14.068434  471577 kapi.go:107] duration metric: took 1m23.506651214s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0812 11:29:14.070232  471577 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, helm-tiller, inspektor-gadget, ingress-dns, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0812 11:29:14.071524  471577 addons.go:510] duration metric: took 1m34.418641869s for enable addons: enabled=[storage-provisioner cloud-spanner metrics-server helm-tiller inspektor-gadget ingress-dns nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0812 11:29:15.134189  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:17.631817  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:20.134270  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:22.634190  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:25.132730  471577 pod_ready.go:102] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"False"
	I0812 11:29:26.134171  471577 pod_ready.go:92] pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace has status "Ready":"True"
	I0812 11:29:26.134199  471577 pod_ready.go:81] duration metric: took 1m36.508441718s for pod "metrics-server-c59844bb4-7nmjb" in "kube-system" namespace to be "Ready" ...
	I0812 11:29:26.134211  471577 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-d8hbd" in "kube-system" namespace to be "Ready" ...
	I0812 11:29:26.139590  471577 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-d8hbd" in "kube-system" namespace has status "Ready":"True"
	I0812 11:29:26.139612  471577 pod_ready.go:81] duration metric: took 5.39484ms for pod "nvidia-device-plugin-daemonset-d8hbd" in "kube-system" namespace to be "Ready" ...
	I0812 11:29:26.139634  471577 pod_ready.go:38] duration metric: took 1m37.711523842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 11:29:26.139660  471577 api_server.go:52] waiting for apiserver process to appear ...
	I0812 11:29:26.139688  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:29:26.139741  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:29:26.187517  471577 cri.go:89] found id: "d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f"
	I0812 11:29:26.187541  471577 cri.go:89] found id: ""
	I0812 11:29:26.187551  471577 logs.go:276] 1 containers: [d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f]
	I0812 11:29:26.187603  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.192219  471577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:29:26.192283  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:29:26.235975  471577 cri.go:89] found id: "e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0"
	I0812 11:29:26.236001  471577 cri.go:89] found id: ""
	I0812 11:29:26.236012  471577 logs.go:276] 1 containers: [e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0]
	I0812 11:29:26.236076  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.241357  471577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:29:26.241425  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:29:26.287303  471577 cri.go:89] found id: "0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3"
	I0812 11:29:26.287339  471577 cri.go:89] found id: ""
	I0812 11:29:26.287349  471577 logs.go:276] 1 containers: [0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3]
	I0812 11:29:26.287405  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.292745  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:29:26.292816  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:29:26.336293  471577 cri.go:89] found id: "9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8"
	I0812 11:29:26.336320  471577 cri.go:89] found id: ""
	I0812 11:29:26.336328  471577 logs.go:276] 1 containers: [9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8]
	I0812 11:29:26.336384  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.340609  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:29:26.340671  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:29:26.382399  471577 cri.go:89] found id: "b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e"
	I0812 11:29:26.382422  471577 cri.go:89] found id: ""
	I0812 11:29:26.382430  471577 logs.go:276] 1 containers: [b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e]
	I0812 11:29:26.382482  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.387183  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:29:26.387247  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:29:26.431192  471577 cri.go:89] found id: "02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3"
	I0812 11:29:26.431212  471577 cri.go:89] found id: ""
	I0812 11:29:26.431220  471577 logs.go:276] 1 containers: [02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3]
	I0812 11:29:26.431272  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:26.435554  471577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:29:26.435622  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:29:26.474663  471577 cri.go:89] found id: ""
	I0812 11:29:26.474694  471577 logs.go:276] 0 containers: []
	W0812 11:29:26.474702  471577 logs.go:278] No container was found matching "kindnet"
	I0812 11:29:26.474720  471577 logs.go:123] Gathering logs for kube-proxy [b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e] ...
	I0812 11:29:26.474735  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e"
	I0812 11:29:26.521212  471577 logs.go:123] Gathering logs for kube-controller-manager [02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3] ...
	I0812 11:29:26.521247  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3"
	I0812 11:29:26.584656  471577 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:29:26.584703  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 11:29:27.575850  471577 logs.go:123] Gathering logs for kubelet ...
	I0812 11:29:27.575907  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0812 11:29:27.631974  471577 logs.go:138] Found kubelet problem: Aug 12 11:27:46 addons-800382 kubelet[1275]: W0812 11:27:46.461293    1275 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	W0812 11:29:27.632165  471577 logs.go:138] Found kubelet problem: Aug 12 11:27:46 addons-800382 kubelet[1275]: E0812 11:27:46.461344    1275 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	I0812 11:29:27.658766  471577 logs.go:123] Gathering logs for dmesg ...
	I0812 11:29:27.658800  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 11:29:27.674837  471577 logs.go:123] Gathering logs for kube-scheduler [9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8] ...
	I0812 11:29:27.674875  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8"
	I0812 11:29:27.729765  471577 logs.go:123] Gathering logs for coredns [0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3] ...
	I0812 11:29:27.729810  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3"
	I0812 11:29:27.778740  471577 logs.go:123] Gathering logs for container status ...
	I0812 11:29:27.778785  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:29:27.831972  471577 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:29:27.832006  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:29:28.039055  471577 logs.go:123] Gathering logs for kube-apiserver [d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f] ...
	I0812 11:29:28.039097  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f"
	I0812 11:29:28.087282  471577 logs.go:123] Gathering logs for etcd [e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0] ...
	I0812 11:29:28.087324  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0"
	I0812 11:29:28.142601  471577 out.go:304] Setting ErrFile to fd 2...
	I0812 11:29:28.142637  471577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0812 11:29:28.142718  471577 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0812 11:29:28.142733  471577 out.go:239]   Aug 12 11:27:46 addons-800382 kubelet[1275]: W0812 11:27:46.461293    1275 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	  Aug 12 11:27:46 addons-800382 kubelet[1275]: W0812 11:27:46.461293    1275 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	W0812 11:29:28.142742  471577 out.go:239]   Aug 12 11:27:46 addons-800382 kubelet[1275]: E0812 11:27:46.461344    1275 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	  Aug 12 11:27:46 addons-800382 kubelet[1275]: E0812 11:27:46.461344    1275 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	I0812 11:29:28.142756  471577 out.go:304] Setting ErrFile to fd 2...
	I0812 11:29:28.142766  471577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:29:38.143623  471577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 11:29:38.168109  471577 api_server.go:72] duration metric: took 1m58.515329394s to wait for apiserver process to appear ...
	I0812 11:29:38.168141  471577 api_server.go:88] waiting for apiserver healthz status ...
	I0812 11:29:38.168257  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 11:29:38.168338  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 11:29:38.214051  471577 cri.go:89] found id: "d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f"
	I0812 11:29:38.214087  471577 cri.go:89] found id: ""
	I0812 11:29:38.214099  471577 logs.go:276] 1 containers: [d63b71cf6636d3d95979ca58c6ea85700ae13a89bd2ec61a22997b635374577f]
	I0812 11:29:38.214169  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.219600  471577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 11:29:38.219720  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 11:29:38.269983  471577 cri.go:89] found id: "e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0"
	I0812 11:29:38.270013  471577 cri.go:89] found id: ""
	I0812 11:29:38.270023  471577 logs.go:276] 1 containers: [e9fe0da9a21879a3ae9565eec549ec54f825d75fe7f894421ff58523140c6fa0]
	I0812 11:29:38.270090  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.274922  471577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 11:29:38.274979  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 11:29:38.324027  471577 cri.go:89] found id: "0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3"
	I0812 11:29:38.324056  471577 cri.go:89] found id: ""
	I0812 11:29:38.324065  471577 logs.go:276] 1 containers: [0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3]
	I0812 11:29:38.324135  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.329176  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 11:29:38.329252  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 11:29:38.366201  471577 cri.go:89] found id: "9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8"
	I0812 11:29:38.366229  471577 cri.go:89] found id: ""
	I0812 11:29:38.366239  471577 logs.go:276] 1 containers: [9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8]
	I0812 11:29:38.366304  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.370929  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 11:29:38.370987  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 11:29:38.417359  471577 cri.go:89] found id: "b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e"
	I0812 11:29:38.417388  471577 cri.go:89] found id: ""
	I0812 11:29:38.417398  471577 logs.go:276] 1 containers: [b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e]
	I0812 11:29:38.417457  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.422015  471577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 11:29:38.422084  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 11:29:38.462280  471577 cri.go:89] found id: "02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3"
	I0812 11:29:38.462309  471577 cri.go:89] found id: ""
	I0812 11:29:38.462319  471577 logs.go:276] 1 containers: [02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3]
	I0812 11:29:38.462382  471577 ssh_runner.go:195] Run: which crictl
	I0812 11:29:38.466977  471577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 11:29:38.467045  471577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 11:29:38.517024  471577 cri.go:89] found id: ""
	I0812 11:29:38.517054  471577 logs.go:276] 0 containers: []
	W0812 11:29:38.517065  471577 logs.go:278] No container was found matching "kindnet"
	I0812 11:29:38.517078  471577 logs.go:123] Gathering logs for kube-scheduler [9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8] ...
	I0812 11:29:38.517106  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cecbe71a662270d60c0ebebbfab393c82fbbd80768505f4d21f1877e746e4d8"
	I0812 11:29:38.563936  471577 logs.go:123] Gathering logs for container status ...
	I0812 11:29:38.563972  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0812 11:29:38.618066  471577 logs.go:123] Gathering logs for kubelet ...
	I0812 11:29:38.618099  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0812 11:29:38.671202  471577 logs.go:138] Found kubelet problem: Aug 12 11:27:46 addons-800382 kubelet[1275]: W0812 11:27:46.461293    1275 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	W0812 11:29:38.671368  471577 logs.go:138] Found kubelet problem: Aug 12 11:27:46 addons-800382 kubelet[1275]: E0812 11:27:46.461344    1275 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-800382" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-800382' and this object
	I0812 11:29:38.697792  471577 logs.go:123] Gathering logs for describe nodes ...
	I0812 11:29:38.697820  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0812 11:29:38.832375  471577 logs.go:123] Gathering logs for coredns [0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3] ...
	I0812 11:29:38.832425  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f1973b579f8a547b1aa98cc82f616d201f7578a0aa95f9bac06672b6b6875f3"
	I0812 11:29:38.883431  471577 logs.go:123] Gathering logs for kube-proxy [b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e] ...
	I0812 11:29:38.883460  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b5760ae1bc0e163a8f8d9734db7734fdc7a360a369c00b37be56fdd48add9e"
	I0812 11:29:38.924977  471577 logs.go:123] Gathering logs for kube-controller-manager [02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3] ...
	I0812 11:29:38.925011  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e8f9760c895cab03a42b2667a51e72e89f92d4266f367911572ecbe4cd38e3"
	I0812 11:29:38.987053  471577 logs.go:123] Gathering logs for CRI-O ...
	I0812 11:29:38.987097  471577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-800382 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.07s)
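
The hundreds of kapi.go:96 entries above come from a polling loop that repeatedly lists pods by label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) and logs their phase until the pods behind each selector report Ready or the addon wait times out. The block below is a minimal client-go sketch of that pattern, not minikube's own kapi.go implementation; the $KUBECONFIG lookup, the ingress-nginx namespace, the 500 ms poll interval, and the 6-minute deadline are illustrative assumptions.

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumptions for illustration only: kubeconfig comes from $KUBECONFIG,
        // and we poll the ingress controller pods using the same label selector
        // that appears in the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        namespace := "ingress-nginx"                       // assumed namespace
        selector := "app.kubernetes.io/name=ingress-nginx" // selector seen in the log
        deadline := time.Now().Add(6 * time.Minute)        // illustrative timeout

        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                log.Fatal(err)
            }
            for _, pod := range pods.Items {
                if podIsReady(&pod) {
                    fmt.Printf("pod %q is Ready\n", pod.Name)
                    return
                }
                fmt.Printf("waiting for pod %q, current phase: %s\n", pod.Name, pod.Status.Phase)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatalf("timed out waiting for pods matching %q", selector)
    }

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

This matches what the timestamps above show: the three selectors were satisfied after roughly 1m17s to 1m24s each, and the failure reported for TestAddons/Setup comes from the start command being killed at the ~40-minute mark (signal: killed), not from these addon waits.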

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 node stop m02 -v=7 --alsologtostderr
E0812 12:18:28.460619  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.500214012s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220134-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:17:25.654498  489492 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:17:25.654784  489492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:17:25.654793  489492 out.go:304] Setting ErrFile to fd 2...
	I0812 12:17:25.654798  489492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:17:25.655483  489492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:17:25.655957  489492 mustload.go:65] Loading cluster: ha-220134
	I0812 12:17:25.656884  489492 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:17:25.656908  489492 stop.go:39] StopHost: ha-220134-m02
	I0812 12:17:25.657430  489492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:17:25.657484  489492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:17:25.673925  489492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0812 12:17:25.674472  489492 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:17:25.675110  489492 main.go:141] libmachine: Using API Version  1
	I0812 12:17:25.675140  489492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:17:25.675586  489492 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:17:25.678227  489492 out.go:177] * Stopping node "ha-220134-m02"  ...
	I0812 12:17:25.679626  489492 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 12:17:25.679672  489492 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:17:25.679929  489492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 12:17:25.679973  489492 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:17:25.683068  489492 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:17:25.683575  489492 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:17:25.683600  489492 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:17:25.683730  489492 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:17:25.683947  489492 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:17:25.684102  489492 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:17:25.684342  489492 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:17:25.774320  489492 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 12:17:25.831587  489492 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 12:17:25.888697  489492 main.go:141] libmachine: Stopping "ha-220134-m02"...
	I0812 12:17:25.888728  489492 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:17:25.890421  489492 main.go:141] libmachine: (ha-220134-m02) Calling .Stop
	I0812 12:17:25.894012  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 0/120
	I0812 12:17:26.896077  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 1/120
	I0812 12:17:27.897862  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 2/120
	I0812 12:17:28.899877  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 3/120
	I0812 12:17:29.901372  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 4/120
	I0812 12:17:30.903475  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 5/120
	I0812 12:17:31.905231  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 6/120
	I0812 12:17:32.906561  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 7/120
	I0812 12:17:33.907968  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 8/120
	I0812 12:17:34.909272  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 9/120
	I0812 12:17:35.911596  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 10/120
	I0812 12:17:36.913194  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 11/120
	I0812 12:17:37.914652  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 12/120
	I0812 12:17:38.916609  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 13/120
	I0812 12:17:39.918439  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 14/120
	I0812 12:17:40.920566  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 15/120
	I0812 12:17:41.922239  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 16/120
	I0812 12:17:42.923768  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 17/120
	I0812 12:17:43.925744  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 18/120
	I0812 12:17:44.927144  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 19/120
	I0812 12:17:45.928704  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 20/120
	I0812 12:17:46.930355  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 21/120
	I0812 12:17:47.931843  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 22/120
	I0812 12:17:48.933507  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 23/120
	I0812 12:17:49.935090  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 24/120
	I0812 12:17:50.937199  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 25/120
	I0812 12:17:51.938828  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 26/120
	I0812 12:17:52.940442  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 27/120
	I0812 12:17:53.942016  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 28/120
	I0812 12:17:54.943711  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 29/120
	I0812 12:17:55.945936  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 30/120
	I0812 12:17:56.947440  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 31/120
	I0812 12:17:57.949108  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 32/120
	I0812 12:17:58.950609  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 33/120
	I0812 12:17:59.952101  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 34/120
	I0812 12:18:00.954274  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 35/120
	I0812 12:18:01.956720  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 36/120
	I0812 12:18:02.958436  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 37/120
	I0812 12:18:03.960058  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 38/120
	I0812 12:18:04.961478  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 39/120
	I0812 12:18:05.963649  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 40/120
	I0812 12:18:06.965411  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 41/120
	I0812 12:18:07.967055  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 42/120
	I0812 12:18:08.968448  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 43/120
	I0812 12:18:09.970326  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 44/120
	I0812 12:18:10.972411  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 45/120
	I0812 12:18:11.973963  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 46/120
	I0812 12:18:12.976093  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 47/120
	I0812 12:18:13.978744  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 48/120
	I0812 12:18:14.980898  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 49/120
	I0812 12:18:15.982940  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 50/120
	I0812 12:18:16.985028  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 51/120
	I0812 12:18:17.986293  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 52/120
	I0812 12:18:18.987794  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 53/120
	I0812 12:18:19.989343  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 54/120
	I0812 12:18:20.991413  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 55/120
	I0812 12:18:21.993186  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 56/120
	I0812 12:18:22.994572  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 57/120
	I0812 12:18:23.996151  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 58/120
	I0812 12:18:24.998576  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 59/120
	I0812 12:18:26.000327  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 60/120
	I0812 12:18:27.002764  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 61/120
	I0812 12:18:28.004809  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 62/120
	I0812 12:18:29.006596  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 63/120
	I0812 12:18:30.008132  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 64/120
	I0812 12:18:31.010304  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 65/120
	I0812 12:18:32.012015  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 66/120
	I0812 12:18:33.014125  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 67/120
	I0812 12:18:34.015956  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 68/120
	I0812 12:18:35.017493  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 69/120
	I0812 12:18:36.019673  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 70/120
	I0812 12:18:37.021634  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 71/120
	I0812 12:18:38.023149  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 72/120
	I0812 12:18:39.024648  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 73/120
	I0812 12:18:40.025940  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 74/120
	I0812 12:18:41.028061  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 75/120
	I0812 12:18:42.029994  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 76/120
	I0812 12:18:43.031796  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 77/120
	I0812 12:18:44.034002  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 78/120
	I0812 12:18:45.035352  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 79/120
	I0812 12:18:46.036967  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 80/120
	I0812 12:18:47.038435  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 81/120
	I0812 12:18:48.039875  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 82/120
	I0812 12:18:49.041708  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 83/120
	I0812 12:18:50.043473  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 84/120
	I0812 12:18:51.045813  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 85/120
	I0812 12:18:52.047353  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 86/120
	I0812 12:18:53.048947  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 87/120
	I0812 12:18:54.050300  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 88/120
	I0812 12:18:55.051990  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 89/120
	I0812 12:18:56.054419  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 90/120
	I0812 12:18:57.057028  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 91/120
	I0812 12:18:58.058644  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 92/120
	I0812 12:18:59.059924  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 93/120
	I0812 12:19:00.061764  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 94/120
	I0812 12:19:01.064017  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 95/120
	I0812 12:19:02.065454  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 96/120
	I0812 12:19:03.066674  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 97/120
	I0812 12:19:04.068103  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 98/120
	I0812 12:19:05.069667  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 99/120
	I0812 12:19:06.071548  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 100/120
	I0812 12:19:07.073043  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 101/120
	I0812 12:19:08.074446  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 102/120
	I0812 12:19:09.076074  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 103/120
	I0812 12:19:10.077543  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 104/120
	I0812 12:19:11.079312  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 105/120
	I0812 12:19:12.080847  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 106/120
	I0812 12:19:13.082341  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 107/120
	I0812 12:19:14.083867  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 108/120
	I0812 12:19:15.086046  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 109/120
	I0812 12:19:16.088421  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 110/120
	I0812 12:19:17.090425  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 111/120
	I0812 12:19:18.092062  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 112/120
	I0812 12:19:19.093697  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 113/120
	I0812 12:19:20.094968  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 114/120
	I0812 12:19:21.097037  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 115/120
	I0812 12:19:22.098380  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 116/120
	I0812 12:19:23.100391  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 117/120
	I0812 12:19:24.102170  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 118/120
	I0812 12:19:25.103948  489492 main.go:141] libmachine: (ha-220134-m02) Waiting for machine to stop 119/120
	I0812 12:19:26.104686  489492 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 12:19:26.104892  489492 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-220134 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (19.240846119s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:19:26.158415  490361 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:19:26.158563  490361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:26.158576  490361 out.go:304] Setting ErrFile to fd 2...
	I0812 12:19:26.158581  490361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:26.158870  490361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:19:26.159088  490361 out.go:298] Setting JSON to false
	I0812 12:19:26.159117  490361 mustload.go:65] Loading cluster: ha-220134
	I0812 12:19:26.159217  490361 notify.go:220] Checking for updates...
	I0812 12:19:26.159631  490361 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:19:26.159659  490361 status.go:255] checking status of ha-220134 ...
	I0812 12:19:26.160178  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.160253  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.181238  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0812 12:19:26.181769  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.182400  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.182460  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.182838  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.183048  490361 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:19:26.185048  490361 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:19:26.185068  490361 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:26.185447  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.185494  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.200895  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0812 12:19:26.201453  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.202076  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.202125  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.202450  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.202682  490361 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:19:26.205742  490361 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:26.206159  490361 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:26.206186  490361 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:26.206336  490361 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:26.206642  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.206682  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.223350  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I0812 12:19:26.223900  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.224401  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.224429  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.224790  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.224988  490361 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:19:26.225221  490361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:26.225265  490361 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:19:26.228603  490361 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:26.229194  490361 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:26.229241  490361 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:26.229496  490361 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:19:26.229686  490361 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:19:26.229878  490361 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:19:26.230072  490361 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:19:26.312078  490361 ssh_runner.go:195] Run: systemctl --version
	I0812 12:19:26.320636  490361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:26.338994  490361 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:26.339032  490361 api_server.go:166] Checking apiserver status ...
	I0812 12:19:26.339103  490361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:26.356397  490361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:19:26.367491  490361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:26.367561  490361 ssh_runner.go:195] Run: ls
	I0812 12:19:26.372958  490361 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:26.378708  490361 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:26.378736  490361 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:19:26.378751  490361 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:26.378768  490361 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:19:26.379175  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.379221  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.396202  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43963
	I0812 12:19:26.396651  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.397231  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.397255  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.397582  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.397843  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:19:26.399420  490361 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:19:26.399438  490361 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:26.399726  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.399767  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.415554  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0812 12:19:26.416004  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.416580  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.416598  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.416937  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.417204  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:19:26.420114  490361 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:26.420588  490361 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:26.420614  490361 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:26.420806  490361 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:26.421236  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:26.421294  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:26.436893  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37421
	I0812 12:19:26.437336  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:26.437933  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:26.437956  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:26.438275  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:26.438520  490361 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:19:26.438734  490361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:26.438755  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:19:26.441714  490361 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:26.442146  490361 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:26.442173  490361 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:26.442340  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:19:26.442523  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:19:26.442669  490361 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:19:26.442804  490361 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:19:44.973313  490361 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:19:44.973495  490361 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:19:44.973522  490361 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:44.973540  490361 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:19:44.973562  490361 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:44.973572  490361 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:19:44.973929  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:44.973981  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:44.990516  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0812 12:19:44.991025  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:44.991483  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:44.991511  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:44.991826  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:44.992028  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:19:44.993524  490361 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:19:44.993547  490361 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:44.993835  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:44.993878  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:45.009067  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I0812 12:19:45.009640  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:45.010214  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:45.010238  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:45.010605  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:45.010854  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:19:45.013891  490361 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:45.014347  490361 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:45.014376  490361 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:45.014567  490361 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:45.015011  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:45.015070  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:45.032221  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0812 12:19:45.032754  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:45.033361  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:45.033383  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:45.033698  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:45.033899  490361 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:19:45.034087  490361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:45.034112  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:19:45.036849  490361 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:45.037343  490361 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:45.037368  490361 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:45.037546  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:19:45.037718  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:19:45.037852  490361 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:19:45.037972  490361 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:19:45.122462  490361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:45.141405  490361 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:45.141446  490361 api_server.go:166] Checking apiserver status ...
	I0812 12:19:45.141495  490361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:45.159822  490361 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:19:45.169221  490361 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:45.169298  490361 ssh_runner.go:195] Run: ls
	I0812 12:19:45.174026  490361 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:45.178397  490361 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:45.178424  490361 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:19:45.178433  490361 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:45.178448  490361 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:19:45.178811  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:45.178861  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:45.194573  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0812 12:19:45.195075  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:45.195573  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:45.195595  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:45.195911  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:45.196106  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:19:45.197732  490361 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:19:45.197750  490361 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:45.198039  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:45.198104  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:45.214876  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0812 12:19:45.215343  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:45.215855  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:45.215876  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:45.216288  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:45.216487  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:19:45.219291  490361 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:45.219789  490361 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:45.219819  490361 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:45.219942  490361 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:45.220260  490361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:45.220307  490361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:45.236365  490361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0812 12:19:45.236844  490361 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:45.237355  490361 main.go:141] libmachine: Using API Version  1
	I0812 12:19:45.237375  490361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:45.237786  490361 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:45.238030  490361 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:19:45.238239  490361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:45.238262  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:19:45.240861  490361 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:45.241254  490361 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:45.241283  490361 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:45.241416  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:19:45.241584  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:19:45.241736  490361 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:19:45.241884  490361 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:19:45.327730  490361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:45.345296  490361 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220134 -n ha-220134
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220134 logs -n 25: (1.504408746s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m03_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m04 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp testdata/cp-test.txt                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m04_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03:/home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m03 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-220134 node stop m02 -v=7                                                    | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:11:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:11:33.186100  485208 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:11:33.186217  485208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:33.186226  485208 out.go:304] Setting ErrFile to fd 2...
	I0812 12:11:33.186230  485208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:33.186423  485208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:11:33.187021  485208 out.go:298] Setting JSON to false
	I0812 12:11:33.188089  485208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14024,"bootTime":1723450669,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:11:33.188149  485208 start.go:139] virtualization: kvm guest
	I0812 12:11:33.190527  485208 out.go:177] * [ha-220134] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:11:33.192169  485208 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:11:33.192185  485208 notify.go:220] Checking for updates...
	I0812 12:11:33.195024  485208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:11:33.196400  485208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:11:33.198120  485208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.199635  485208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:11:33.201070  485208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:11:33.202724  485208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:11:33.239881  485208 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:11:33.241290  485208 start.go:297] selected driver: kvm2
	I0812 12:11:33.241314  485208 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:11:33.241327  485208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:11:33.242088  485208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:11:33.242171  485208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:11:33.258266  485208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:11:33.258321  485208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:11:33.258544  485208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:11:33.258612  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:11:33.258621  485208 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 12:11:33.258631  485208 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 12:11:33.258691  485208 start.go:340] cluster config:
	{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:11:33.258822  485208 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:11:33.261782  485208 out.go:177] * Starting "ha-220134" primary control-plane node in "ha-220134" cluster
	I0812 12:11:33.263232  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:11:33.263278  485208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:11:33.263289  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:11:33.263400  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:11:33.263419  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:11:33.263759  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:11:33.263784  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json: {Name:mk32ee8146005faf70784d964d2eaca91fba2ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:11:33.263936  485208 start.go:360] acquireMachinesLock for ha-220134: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:11:33.263965  485208 start.go:364] duration metric: took 15.961µs to acquireMachinesLock for "ha-220134"
	I0812 12:11:33.263982  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:11:33.264051  485208 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:11:33.265763  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:11:33.265937  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:33.265990  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:33.280982  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0812 12:11:33.281491  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:33.282123  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:11:33.282145  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:33.282557  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:33.282783  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:33.282962  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:33.283144  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:11:33.283174  485208 client.go:168] LocalClient.Create starting
	I0812 12:11:33.283224  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:11:33.283274  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:11:33.283299  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:11:33.283394  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:11:33.283423  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:11:33.283442  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:11:33.283467  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:11:33.283486  485208 main.go:141] libmachine: (ha-220134) Calling .PreCreateCheck
	I0812 12:11:33.283834  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:33.284239  485208 main.go:141] libmachine: Creating machine...
	I0812 12:11:33.284255  485208 main.go:141] libmachine: (ha-220134) Calling .Create
	I0812 12:11:33.284390  485208 main.go:141] libmachine: (ha-220134) Creating KVM machine...
	I0812 12:11:33.285498  485208 main.go:141] libmachine: (ha-220134) DBG | found existing default KVM network
	I0812 12:11:33.286220  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.286052  485231 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0812 12:11:33.286246  485208 main.go:141] libmachine: (ha-220134) DBG | created network xml: 
	I0812 12:11:33.286262  485208 main.go:141] libmachine: (ha-220134) DBG | <network>
	I0812 12:11:33.286272  485208 main.go:141] libmachine: (ha-220134) DBG |   <name>mk-ha-220134</name>
	I0812 12:11:33.286302  485208 main.go:141] libmachine: (ha-220134) DBG |   <dns enable='no'/>
	I0812 12:11:33.286327  485208 main.go:141] libmachine: (ha-220134) DBG |   
	I0812 12:11:33.286340  485208 main.go:141] libmachine: (ha-220134) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 12:11:33.286349  485208 main.go:141] libmachine: (ha-220134) DBG |     <dhcp>
	I0812 12:11:33.286368  485208 main.go:141] libmachine: (ha-220134) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 12:11:33.286383  485208 main.go:141] libmachine: (ha-220134) DBG |     </dhcp>
	I0812 12:11:33.286396  485208 main.go:141] libmachine: (ha-220134) DBG |   </ip>
	I0812 12:11:33.286406  485208 main.go:141] libmachine: (ha-220134) DBG |   
	I0812 12:11:33.286440  485208 main.go:141] libmachine: (ha-220134) DBG | </network>
	I0812 12:11:33.286466  485208 main.go:141] libmachine: (ha-220134) DBG | 
	I0812 12:11:33.291860  485208 main.go:141] libmachine: (ha-220134) DBG | trying to create private KVM network mk-ha-220134 192.168.39.0/24...
	I0812 12:11:33.360018  485208 main.go:141] libmachine: (ha-220134) DBG | private KVM network mk-ha-220134 192.168.39.0/24 created
	I0812 12:11:33.360053  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.360000  485231 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.360066  485208 main.go:141] libmachine: (ha-220134) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 ...
	I0812 12:11:33.360082  485208 main.go:141] libmachine: (ha-220134) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:11:33.360215  485208 main.go:141] libmachine: (ha-220134) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:11:33.640396  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.640225  485231 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa...
	I0812 12:11:33.752867  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.752706  485231 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/ha-220134.rawdisk...
	I0812 12:11:33.752897  485208 main.go:141] libmachine: (ha-220134) DBG | Writing magic tar header
	I0812 12:11:33.752910  485208 main.go:141] libmachine: (ha-220134) DBG | Writing SSH key tar header
	I0812 12:11:33.752921  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.752830  485231 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 ...
	I0812 12:11:33.752932  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134
	I0812 12:11:33.752942  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:11:33.752952  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.752978  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:11:33.752991  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:11:33.752999  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:11:33.753009  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home
	I0812 12:11:33.753021  485208 main.go:141] libmachine: (ha-220134) DBG | Skipping /home - not owner
	I0812 12:11:33.753052  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 (perms=drwx------)
	I0812 12:11:33.753074  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:11:33.753104  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:11:33.753119  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:11:33.753133  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:11:33.753147  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:11:33.753161  485208 main.go:141] libmachine: (ha-220134) Creating domain...
	I0812 12:11:33.754324  485208 main.go:141] libmachine: (ha-220134) define libvirt domain using xml: 
	I0812 12:11:33.754353  485208 main.go:141] libmachine: (ha-220134) <domain type='kvm'>
	I0812 12:11:33.754364  485208 main.go:141] libmachine: (ha-220134)   <name>ha-220134</name>
	I0812 12:11:33.754375  485208 main.go:141] libmachine: (ha-220134)   <memory unit='MiB'>2200</memory>
	I0812 12:11:33.754381  485208 main.go:141] libmachine: (ha-220134)   <vcpu>2</vcpu>
	I0812 12:11:33.754386  485208 main.go:141] libmachine: (ha-220134)   <features>
	I0812 12:11:33.754391  485208 main.go:141] libmachine: (ha-220134)     <acpi/>
	I0812 12:11:33.754396  485208 main.go:141] libmachine: (ha-220134)     <apic/>
	I0812 12:11:33.754401  485208 main.go:141] libmachine: (ha-220134)     <pae/>
	I0812 12:11:33.754416  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754423  485208 main.go:141] libmachine: (ha-220134)   </features>
	I0812 12:11:33.754428  485208 main.go:141] libmachine: (ha-220134)   <cpu mode='host-passthrough'>
	I0812 12:11:33.754434  485208 main.go:141] libmachine: (ha-220134)   
	I0812 12:11:33.754438  485208 main.go:141] libmachine: (ha-220134)   </cpu>
	I0812 12:11:33.754444  485208 main.go:141] libmachine: (ha-220134)   <os>
	I0812 12:11:33.754449  485208 main.go:141] libmachine: (ha-220134)     <type>hvm</type>
	I0812 12:11:33.754488  485208 main.go:141] libmachine: (ha-220134)     <boot dev='cdrom'/>
	I0812 12:11:33.754520  485208 main.go:141] libmachine: (ha-220134)     <boot dev='hd'/>
	I0812 12:11:33.754536  485208 main.go:141] libmachine: (ha-220134)     <bootmenu enable='no'/>
	I0812 12:11:33.754548  485208 main.go:141] libmachine: (ha-220134)   </os>
	I0812 12:11:33.754561  485208 main.go:141] libmachine: (ha-220134)   <devices>
	I0812 12:11:33.754577  485208 main.go:141] libmachine: (ha-220134)     <disk type='file' device='cdrom'>
	I0812 12:11:33.754593  485208 main.go:141] libmachine: (ha-220134)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/boot2docker.iso'/>
	I0812 12:11:33.754609  485208 main.go:141] libmachine: (ha-220134)       <target dev='hdc' bus='scsi'/>
	I0812 12:11:33.754623  485208 main.go:141] libmachine: (ha-220134)       <readonly/>
	I0812 12:11:33.754635  485208 main.go:141] libmachine: (ha-220134)     </disk>
	I0812 12:11:33.754649  485208 main.go:141] libmachine: (ha-220134)     <disk type='file' device='disk'>
	I0812 12:11:33.754663  485208 main.go:141] libmachine: (ha-220134)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:11:33.754677  485208 main.go:141] libmachine: (ha-220134)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/ha-220134.rawdisk'/>
	I0812 12:11:33.754701  485208 main.go:141] libmachine: (ha-220134)       <target dev='hda' bus='virtio'/>
	I0812 12:11:33.754714  485208 main.go:141] libmachine: (ha-220134)     </disk>
	I0812 12:11:33.754727  485208 main.go:141] libmachine: (ha-220134)     <interface type='network'>
	I0812 12:11:33.754741  485208 main.go:141] libmachine: (ha-220134)       <source network='mk-ha-220134'/>
	I0812 12:11:33.754753  485208 main.go:141] libmachine: (ha-220134)       <model type='virtio'/>
	I0812 12:11:33.754765  485208 main.go:141] libmachine: (ha-220134)     </interface>
	I0812 12:11:33.754778  485208 main.go:141] libmachine: (ha-220134)     <interface type='network'>
	I0812 12:11:33.754794  485208 main.go:141] libmachine: (ha-220134)       <source network='default'/>
	I0812 12:11:33.754807  485208 main.go:141] libmachine: (ha-220134)       <model type='virtio'/>
	I0812 12:11:33.754818  485208 main.go:141] libmachine: (ha-220134)     </interface>
	I0812 12:11:33.754830  485208 main.go:141] libmachine: (ha-220134)     <serial type='pty'>
	I0812 12:11:33.754862  485208 main.go:141] libmachine: (ha-220134)       <target port='0'/>
	I0812 12:11:33.754878  485208 main.go:141] libmachine: (ha-220134)     </serial>
	I0812 12:11:33.754893  485208 main.go:141] libmachine: (ha-220134)     <console type='pty'>
	I0812 12:11:33.754906  485208 main.go:141] libmachine: (ha-220134)       <target type='serial' port='0'/>
	I0812 12:11:33.754923  485208 main.go:141] libmachine: (ha-220134)     </console>
	I0812 12:11:33.754936  485208 main.go:141] libmachine: (ha-220134)     <rng model='virtio'>
	I0812 12:11:33.754948  485208 main.go:141] libmachine: (ha-220134)       <backend model='random'>/dev/random</backend>
	I0812 12:11:33.754961  485208 main.go:141] libmachine: (ha-220134)     </rng>
	I0812 12:11:33.754970  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754985  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754997  485208 main.go:141] libmachine: (ha-220134)   </devices>
	I0812 12:11:33.755002  485208 main.go:141] libmachine: (ha-220134) </domain>
	I0812 12:11:33.755012  485208 main.go:141] libmachine: (ha-220134) 
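
Above, the freshly generated <domain> XML is handed to libvirt ("define libvirt domain using xml") and the guest is then started ("Creating domain...") before the driver polls for its DHCP address. A minimal sketch of that define-and-start step, again assuming the Go libvirt bindings rather than the driver's exact code:

    // Sketch only: persist and start a domain from the XML in the log.
    package kvmdriver

    import "libvirt.org/go/libvirt"

    // defineAndStartDomain is roughly `virsh define` followed by `virsh start`.
    // The caller would go on to wait for a DHCP lease, as the log does next.
    func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return nil, err
        }
        if err := dom.Create(); err != nil {
            dom.Free()
            return nil, err
        }
        return dom, nil
    }
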
	I0812 12:11:33.759352  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:03:67:f1 in network default
	I0812 12:11:33.760110  485208 main.go:141] libmachine: (ha-220134) Ensuring networks are active...
	I0812 12:11:33.760131  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:33.760878  485208 main.go:141] libmachine: (ha-220134) Ensuring network default is active
	I0812 12:11:33.761266  485208 main.go:141] libmachine: (ha-220134) Ensuring network mk-ha-220134 is active
	I0812 12:11:33.761880  485208 main.go:141] libmachine: (ha-220134) Getting domain xml...
	I0812 12:11:33.762678  485208 main.go:141] libmachine: (ha-220134) Creating domain...
	I0812 12:11:34.975900  485208 main.go:141] libmachine: (ha-220134) Waiting to get IP...
	I0812 12:11:34.976768  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:34.977206  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:34.977230  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:34.977181  485231 retry.go:31] will retry after 288.895038ms: waiting for machine to come up
	I0812 12:11:35.267763  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:35.268298  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:35.268326  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:35.268241  485231 retry.go:31] will retry after 387.612987ms: waiting for machine to come up
	I0812 12:11:35.657979  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:35.658474  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:35.658501  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:35.658431  485231 retry.go:31] will retry after 449.177651ms: waiting for machine to come up
	I0812 12:11:36.109210  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:36.109686  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:36.109711  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:36.109613  485231 retry.go:31] will retry after 395.683299ms: waiting for machine to come up
	I0812 12:11:36.507341  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:36.507826  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:36.507856  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:36.507771  485231 retry.go:31] will retry after 725.500863ms: waiting for machine to come up
	I0812 12:11:37.235267  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:37.235665  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:37.235694  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:37.235627  485231 retry.go:31] will retry after 798.697333ms: waiting for machine to come up
	I0812 12:11:38.035576  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:38.036019  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:38.036062  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:38.035946  485231 retry.go:31] will retry after 872.844105ms: waiting for machine to come up
	I0812 12:11:38.910826  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:38.911218  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:38.911249  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:38.911175  485231 retry.go:31] will retry after 985.561572ms: waiting for machine to come up
	I0812 12:11:39.899617  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:39.900083  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:39.900108  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:39.900021  485231 retry.go:31] will retry after 1.598872532s: waiting for machine to come up
	I0812 12:11:41.500937  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:41.501445  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:41.501476  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:41.501385  485231 retry.go:31] will retry after 2.324192549s: waiting for machine to come up
	I0812 12:11:43.826795  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:43.827291  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:43.827323  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:43.827230  485231 retry.go:31] will retry after 2.849217598s: waiting for machine to come up
	I0812 12:11:46.680256  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:46.680620  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:46.680645  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:46.680593  485231 retry.go:31] will retry after 3.064622363s: waiting for machine to come up
	I0812 12:11:49.747477  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:49.747946  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:49.747971  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:49.747895  485231 retry.go:31] will retry after 3.790371548s: waiting for machine to come up
	I0812 12:11:53.539642  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.539997  485208 main.go:141] libmachine: (ha-220134) Found IP for machine: 192.168.39.228
	I0812 12:11:53.540031  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has current primary IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.540040  485208 main.go:141] libmachine: (ha-220134) Reserving static IP address...
	I0812 12:11:53.540360  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find host DHCP lease matching {name: "ha-220134", mac: "52:54:00:91:2e:31", ip: "192.168.39.228"} in network mk-ha-220134
	I0812 12:11:53.617206  485208 main.go:141] libmachine: (ha-220134) DBG | Getting to WaitForSSH function...
	I0812 12:11:53.617243  485208 main.go:141] libmachine: (ha-220134) Reserved static IP address: 192.168.39.228
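
The "Waiting to get IP..." block above is a plain polling loop: the driver repeatedly asks libvirt for the domain's DHCP lease, logs "unable to find current IP address" while it fails, and sleeps for a growing, jittered interval (288ms up to a few seconds) until the lease appears. A hypothetical helper with the same shape (names and backoff constants are mine, not minikube's):

    package kvmdriver

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP mirrors the retry pattern in the log: poll a lookup function
    // with a growing, jittered delay until it yields an address or the
    // deadline passes. lookup would wrap the DHCP-lease query that keeps
    // logging "unable to find current IP address" above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Jittered, roughly exponential backoff, similar in spirit to the
            // 288ms -> 3.7s progression seen in the log.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            if delay *= 2; delay > 4*time.Second {
                delay = 4 * time.Second
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }
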
	I0812 12:11:53.617258  485208 main.go:141] libmachine: (ha-220134) Waiting for SSH to be available...
	I0812 12:11:53.619839  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.620303  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.620336  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.620396  485208 main.go:141] libmachine: (ha-220134) DBG | Using SSH client type: external
	I0812 12:11:53.620419  485208 main.go:141] libmachine: (ha-220134) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa (-rw-------)
	I0812 12:11:53.620445  485208 main.go:141] libmachine: (ha-220134) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:11:53.620463  485208 main.go:141] libmachine: (ha-220134) DBG | About to run SSH command:
	I0812 12:11:53.620474  485208 main.go:141] libmachine: (ha-220134) DBG | exit 0
	I0812 12:11:53.741422  485208 main.go:141] libmachine: (ha-220134) DBG | SSH cmd err, output: <nil>: 
	I0812 12:11:53.741716  485208 main.go:141] libmachine: (ha-220134) KVM machine creation complete!
	I0812 12:11:53.742080  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:53.742714  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:53.742909  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:53.743101  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:11:53.743118  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:11:53.744621  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:11:53.744636  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:11:53.744641  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:11:53.744647  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.746912  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.747241  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.747267  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.747414  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.747607  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.747745  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.747869  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.748222  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.748450  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.748468  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:11:53.848653  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
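
The "Using SSH client type: native" lines show the Go-native SSH client dialing 192.168.39.228:22 and running `exit 0` as a readiness probe. A comparable probe, sketched with golang.org/x/crypto/ssh rather than minikube's own client wrapper:

    package kvmdriver

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH sketches the `exit 0` reachability check above. keyPath would
    // be the machine's id_rsa from the log; addr is host:port, e.g.
    // "192.168.39.228:22".
    func probeSSH(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0") // succeeds once the guest accepts SSH
    }
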
	I0812 12:11:53.848674  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:11:53.848682  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.851655  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.852060  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.852091  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.852272  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.852505  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.852677  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.852860  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.853067  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.853298  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.853312  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:11:53.954357  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:11:53.954469  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:11:53.954480  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:11:53.954489  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:53.954863  485208 buildroot.go:166] provisioning hostname "ha-220134"
	I0812 12:11:53.954897  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:53.955102  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.957563  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.957924  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.957956  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.958082  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.958292  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.958468  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.958612  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.958777  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.958968  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.958982  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134 && echo "ha-220134" | sudo tee /etc/hostname
	I0812 12:11:54.072834  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:11:54.072867  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.076065  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.076467  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.076503  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.076665  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.076919  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.077072  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.077278  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.077483  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.077714  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.077741  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:11:54.186128  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:11:54.186164  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:11:54.186228  485208 buildroot.go:174] setting up certificates
	I0812 12:11:54.186239  485208 provision.go:84] configureAuth start
	I0812 12:11:54.186252  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:54.186574  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:54.189163  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.189491  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.189533  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.189599  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.191953  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.192339  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.192365  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.192507  485208 provision.go:143] copyHostCerts
	I0812 12:11:54.192544  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:11:54.192612  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:11:54.192623  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:11:54.192717  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:11:54.192870  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:11:54.192904  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:11:54.192915  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:11:54.192957  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:11:54.193021  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:11:54.193046  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:11:54.193055  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:11:54.193100  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:11:54.193166  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134 san=[127.0.0.1 192.168.39.228 ha-220134 localhost minikube]
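
The server certificate above is generated with the SANs 127.0.0.1, 192.168.39.228, ha-220134, localhost and minikube, signed with the profile's CA key. For illustration, a self-signed variant carrying the same SANs via crypto/x509 (the real flow signs server.pem with ca-key.pem rather than self-signing):

    package kvmdriver

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCertPEM sketches the "generating server cert" step with the
    // SAN list from the log. Self-signed for brevity only.
    func newServerCertPEM() ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-220134"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-220134", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }
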
	I0812 12:11:54.372749  485208 provision.go:177] copyRemoteCerts
	I0812 12:11:54.372827  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:11:54.372857  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.375849  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.376400  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.376425  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.376748  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.377033  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.377293  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.377482  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:54.460265  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:11:54.460342  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:11:54.485745  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:11:54.485834  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0812 12:11:54.510499  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:11:54.510602  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 12:11:54.535012  485208 provision.go:87] duration metric: took 348.757151ms to configureAuth
	I0812 12:11:54.535041  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:11:54.535266  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:11:54.535398  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.538016  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.538399  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.538426  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.538633  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.538838  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.539025  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.539154  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.539302  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.539611  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.539636  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:11:54.817462  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:11:54.817498  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:11:54.817523  485208 main.go:141] libmachine: (ha-220134) Calling .GetURL
	I0812 12:11:54.819130  485208 main.go:141] libmachine: (ha-220134) DBG | Using libvirt version 6000000
	I0812 12:11:54.821645  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.821997  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.822034  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.822192  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:11:54.822212  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:11:54.822222  485208 client.go:171] duration metric: took 21.539035903s to LocalClient.Create
	I0812 12:11:54.822258  485208 start.go:167] duration metric: took 21.539114148s to libmachine.API.Create "ha-220134"
	I0812 12:11:54.822272  485208 start.go:293] postStartSetup for "ha-220134" (driver="kvm2")
	I0812 12:11:54.822287  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:11:54.822312  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:54.822652  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:11:54.822679  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.825308  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.825675  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.825703  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.825845  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.826086  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.826291  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.826425  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:54.908273  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:11:54.912764  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:11:54.912801  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:11:54.912880  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:11:54.913006  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:11:54.913021  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:11:54.913207  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:11:54.922687  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:11:54.947648  485208 start.go:296] duration metric: took 125.360245ms for postStartSetup
	I0812 12:11:54.947706  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:54.948799  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:54.952002  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.952329  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.952361  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.952580  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:11:54.952827  485208 start.go:128] duration metric: took 21.688764926s to createHost
	I0812 12:11:54.952857  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.954861  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.955171  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.955192  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.955351  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.955545  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.955722  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.955864  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.956022  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.956186  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.956197  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:11:55.054036  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464715.025192423
	
	I0812 12:11:55.054074  485208 fix.go:216] guest clock: 1723464715.025192423
	I0812 12:11:55.054083  485208 fix.go:229] Guest: 2024-08-12 12:11:55.025192423 +0000 UTC Remote: 2024-08-12 12:11:54.952841314 +0000 UTC m=+21.803416181 (delta=72.351109ms)
	I0812 12:11:55.054107  485208 fix.go:200] guest clock delta is within tolerance: 72.351109ms
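
In the fix.go lines above, the guest's `date +%s.%N` output is parsed and compared with the host-side timestamp, and the ~72ms delta is accepted. A sketch of that comparison; the tolerance argument is a placeholder, not necessarily the value minikube enforces:

    package kvmdriver

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // checkGuestClock parses output like "1723464715.025192423" from the guest
    // and rejects drift beyond the given tolerance.
    func checkGuestClock(guestOut string, hostRef time.Time, tolerance time.Duration) error {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        delta := hostRef.Sub(time.Unix(sec, nsec))
        if math.Abs(float64(delta)) > float64(tolerance) {
            return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
        }
        return nil
    }
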
	I0812 12:11:55.054112  485208 start.go:83] releasing machines lock for "ha-220134", held for 21.790139043s
	I0812 12:11:55.054136  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.054485  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:55.057190  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.057503  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.057531  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.057677  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058144  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058320  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058422  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:11:55.058478  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:55.058583  485208 ssh_runner.go:195] Run: cat /version.json
	I0812 12:11:55.058607  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:55.061184  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061361  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061577  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.061610  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061762  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.061764  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:55.061790  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061970  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:55.062042  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:55.062125  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:55.062243  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:55.062258  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:55.062378  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:55.062533  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:55.155726  485208 ssh_runner.go:195] Run: systemctl --version
	I0812 12:11:55.161772  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:11:55.322700  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:11:55.328524  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:11:55.328599  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:11:55.344607  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:11:55.344642  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:11:55.344710  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:11:55.361606  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:11:55.375767  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:11:55.375839  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:11:55.390879  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:11:55.405785  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:11:55.524336  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:11:55.686262  485208 docker.go:233] disabling docker service ...
	I0812 12:11:55.686364  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:11:55.700694  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:11:55.714050  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:11:55.838343  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:11:55.960857  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:11:55.974783  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:11:55.993794  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:11:55.993871  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.004591  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:11:56.004677  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.015367  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.026246  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.036926  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:11:56.047567  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.058000  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.076139  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
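
Read together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands themselves, not captured from the run):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
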
	I0812 12:11:56.086872  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:11:56.096377  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:11:56.096467  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:11:56.109476  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:11:56.119668  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:11:56.246639  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:11:56.404629  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:11:56.404713  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:11:56.409594  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:11:56.409656  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:11:56.413572  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:11:56.450863  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:11:56.450977  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:11:56.480838  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:11:56.512289  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:11:56.513499  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:56.516052  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:56.516417  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:56.516438  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:56.516720  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:11:56.521033  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:11:56.534125  485208 kubeadm.go:883] updating cluster {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:11:56.534243  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:11:56.534290  485208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:11:56.565035  485208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:11:56.565136  485208 ssh_runner.go:195] Run: which lz4
	I0812 12:11:56.569041  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0812 12:11:56.569157  485208 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 12:11:56.573362  485208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:11:56.573390  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:11:58.007854  485208 crio.go:462] duration metric: took 1.438727808s to copy over tarball
	I0812 12:11:58.007937  485208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 12:12:00.192513  485208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.184538664s)
	I0812 12:12:00.192549  485208 crio.go:469] duration metric: took 2.184663391s to extract the tarball
	I0812 12:12:00.192558  485208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 12:12:00.231017  485208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:12:00.281405  485208 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:12:00.281437  485208 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:12:00.281447  485208 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.30.3 crio true true} ...
	I0812 12:12:00.281589  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:12:00.281686  485208 ssh_runner.go:195] Run: crio config
	I0812 12:12:00.329283  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:12:00.329306  485208 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 12:12:00.329316  485208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:12:00.329340  485208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220134 NodeName:ha-220134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:12:00.329487  485208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220134"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:12:00.329510  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:12:00.329557  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:12:00.346734  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:12:00.346882  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0812 12:12:00.346958  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:00.357489  485208 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:12:00.357565  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 12:12:00.367309  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0812 12:12:00.383963  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:12:00.400920  485208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0812 12:12:00.417671  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0812 12:12:00.434262  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:12:00.438431  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:12:00.450706  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:12:00.579801  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:12:00.597577  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.228
	I0812 12:12:00.597603  485208 certs.go:194] generating shared ca certs ...
	I0812 12:12:00.597620  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.597789  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:12:00.597850  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:12:00.597861  485208 certs.go:256] generating profile certs ...
	I0812 12:12:00.597916  485208 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:12:00.597942  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt with IP's: []
	I0812 12:12:00.677939  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt ...
	I0812 12:12:00.677974  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt: {Name:mk9fafa446d8b28b9f7b65115def1ce5a05d4c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.678176  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key ...
	I0812 12:12:00.678194  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key: {Name:mk4353a7608a6c005e7bf75fcd414510302dc630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.678310  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3
	I0812 12:12:00.678338  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.254]
	I0812 12:12:00.762928  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 ...
	I0812 12:12:00.762959  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3: {Name:mkd955c01dada19619c74559758a76b9fc4239c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.763137  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3 ...
	I0812 12:12:00.763150  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3: {Name:mk45e5f4c537690b3c1c8e44623614717bdeb3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.763214  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:12:00.763282  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:12:00.763354  485208 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:12:00.763368  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt with IP's: []
	I0812 12:12:00.899121  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt ...
	I0812 12:12:00.899154  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt: {Name:mkeb87ac702b51eb8807073957337d78c2486afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.899327  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key ...
	I0812 12:12:00.899338  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key: {Name:mkc7e9a0b81dcf49c56951bce088c2c205615598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.899415  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:12:00.899431  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:12:00.899442  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:12:00.899455  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:12:00.899467  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:12:00.899479  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:12:00.899501  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:12:00.899513  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:12:00.899563  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:12:00.899605  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:12:00.899613  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:12:00.899632  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:12:00.899654  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:12:00.899676  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:12:00.899713  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:00.899742  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:12:00.899755  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:12:00.899768  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:00.900328  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:12:00.926164  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:12:00.949902  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:12:00.973957  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:12:00.999751  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 12:12:01.024997  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:12:01.052808  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:12:01.079537  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:12:01.103702  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:12:01.132532  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:12:01.158646  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:12:01.187950  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:12:01.208248  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:12:01.214687  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:12:01.226962  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.232011  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.232079  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.238440  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:12:01.249814  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:12:01.261311  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.266352  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.266405  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.272358  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:12:01.284003  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:12:01.295843  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.300572  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.300635  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.306539  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:12:01.318250  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:12:01.322951  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:12:01.323010  485208 kubeadm.go:392] StartCluster: {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:12:01.323088  485208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:12:01.323140  485208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:12:01.361708  485208 cri.go:89] found id: ""
	I0812 12:12:01.361800  485208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:12:01.374462  485208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:12:01.392559  485208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:12:01.404437  485208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:12:01.404455  485208 kubeadm.go:157] found existing configuration files:
	
	I0812 12:12:01.404506  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:12:01.415830  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:12:01.415917  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:12:01.427544  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:12:01.441613  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:12:01.441675  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:12:01.454912  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:12:01.465686  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:12:01.465765  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:12:01.475115  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:12:01.483837  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:12:01.483908  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 12:12:01.493066  485208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:12:01.600440  485208 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 12:12:01.600526  485208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 12:12:01.720488  485208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 12:12:01.720617  485208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 12:12:01.720757  485208 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 12:12:01.964723  485208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 12:12:02.196686  485208 out.go:204]   - Generating certificates and keys ...
	I0812 12:12:02.196824  485208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 12:12:02.196906  485208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 12:12:02.197568  485208 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 12:12:02.555832  485208 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 12:12:02.706304  485208 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 12:12:02.767137  485208 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 12:12:03.088184  485208 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 12:12:03.088345  485208 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-220134 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0812 12:12:03.167870  485208 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 12:12:03.168076  485208 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-220134 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0812 12:12:03.343957  485208 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 12:12:03.527996  485208 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 12:12:03.668796  485208 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 12:12:03.668976  485208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 12:12:04.004200  485208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 12:12:04.200658  485208 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 12:12:04.651462  485208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 12:12:04.776476  485208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 12:12:04.967615  485208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 12:12:04.968073  485208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 12:12:04.971286  485208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 12:12:04.974689  485208 out.go:204]   - Booting up control plane ...
	I0812 12:12:04.974798  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 12:12:04.974867  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 12:12:04.974981  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 12:12:04.991918  485208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 12:12:04.992859  485208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 12:12:04.992934  485208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 12:12:05.133194  485208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 12:12:05.133322  485208 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 12:12:05.635328  485208 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.236507ms
	I0812 12:12:05.635432  485208 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 12:12:11.543680  485208 kubeadm.go:310] [api-check] The API server is healthy after 5.912552105s
	I0812 12:12:11.566590  485208 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 12:12:11.587583  485208 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 12:12:11.616176  485208 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 12:12:11.616448  485208 kubeadm.go:310] [mark-control-plane] Marking the node ha-220134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 12:12:11.633573  485208 kubeadm.go:310] [bootstrap-token] Using token: ibuffq.8zx5f52ylb7rvh5p
	I0812 12:12:11.635071  485208 out.go:204]   - Configuring RBAC rules ...
	I0812 12:12:11.635253  485208 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 12:12:11.642314  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 12:12:11.653391  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 12:12:11.662006  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 12:12:11.668661  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 12:12:11.674408  485208 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 12:12:11.956406  485208 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 12:12:12.397495  485208 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 12:12:12.957947  485208 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 12:12:12.957977  485208 kubeadm.go:310] 
	I0812 12:12:12.958055  485208 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 12:12:12.958065  485208 kubeadm.go:310] 
	I0812 12:12:12.958194  485208 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 12:12:12.958224  485208 kubeadm.go:310] 
	I0812 12:12:12.958279  485208 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 12:12:12.958358  485208 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 12:12:12.958421  485208 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 12:12:12.958434  485208 kubeadm.go:310] 
	I0812 12:12:12.958502  485208 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 12:12:12.958521  485208 kubeadm.go:310] 
	I0812 12:12:12.958597  485208 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 12:12:12.958607  485208 kubeadm.go:310] 
	I0812 12:12:12.958680  485208 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 12:12:12.958783  485208 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 12:12:12.958871  485208 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 12:12:12.958883  485208 kubeadm.go:310] 
	I0812 12:12:12.958993  485208 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 12:12:12.959118  485208 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 12:12:12.959129  485208 kubeadm.go:310] 
	I0812 12:12:12.959250  485208 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ibuffq.8zx5f52ylb7rvh5p \
	I0812 12:12:12.959394  485208 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f \
	I0812 12:12:12.959425  485208 kubeadm.go:310] 	--control-plane 
	I0812 12:12:12.959432  485208 kubeadm.go:310] 
	I0812 12:12:12.959545  485208 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 12:12:12.959558  485208 kubeadm.go:310] 
	I0812 12:12:12.959643  485208 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ibuffq.8zx5f52ylb7rvh5p \
	I0812 12:12:12.959791  485208 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f 
	I0812 12:12:12.959939  485208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 12:12:12.959963  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:12:12.959972  485208 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 12:12:12.962013  485208 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 12:12:12.963712  485208 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0812 12:12:12.969430  485208 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 12:12:12.969454  485208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0812 12:12:12.989869  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 12:12:13.422197  485208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 12:12:13.422338  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:13.422380  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134 minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=true
	I0812 12:12:13.453066  485208 ops.go:34] apiserver oom_adj: -16
	I0812 12:12:13.607152  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:14.108069  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:14.607602  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:15.107870  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:15.607655  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:16.107346  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:16.607784  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:17.107555  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:17.607365  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:18.107700  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:18.607848  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:19.107912  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:19.607873  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:20.107209  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:20.608203  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:21.108076  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:21.607706  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:22.107644  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:22.607642  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:23.107871  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:23.608104  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:24.107517  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:24.607231  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:25.108221  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:25.204930  485208 kubeadm.go:1113] duration metric: took 11.782675487s to wait for elevateKubeSystemPrivileges
	I0812 12:12:25.204974  485208 kubeadm.go:394] duration metric: took 23.881968454s to StartCluster
	I0812 12:12:25.204998  485208 settings.go:142] acquiring lock: {Name:mke9ed38a916e17fe99baccde568c442d70df1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:25.205115  485208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:12:25.205837  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:25.206097  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 12:12:25.206109  485208 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:12:25.206142  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:12:25.206156  485208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 12:12:25.206246  485208 addons.go:69] Setting storage-provisioner=true in profile "ha-220134"
	I0812 12:12:25.206295  485208 addons.go:234] Setting addon storage-provisioner=true in "ha-220134"
	I0812 12:12:25.206305  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:25.206330  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:25.206259  485208 addons.go:69] Setting default-storageclass=true in profile "ha-220134"
	I0812 12:12:25.206383  485208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-220134"
	I0812 12:12:25.206702  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.206753  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.206817  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.206853  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.222325  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
	I0812 12:12:25.222335  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0812 12:12:25.222893  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.222900  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.223410  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.223410  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.223437  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.223448  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.223872  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.223876  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.224071  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.224404  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.224448  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.226840  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:12:25.227237  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 12:12:25.227882  485208 cert_rotation.go:137] Starting client certificate rotation controller
	I0812 12:12:25.228157  485208 addons.go:234] Setting addon default-storageclass=true in "ha-220134"
	I0812 12:12:25.228203  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:25.228595  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.228648  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.240607  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I0812 12:12:25.241157  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.241678  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.241711  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.242030  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.242249  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.243899  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0812 12:12:25.244190  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:25.244273  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.244859  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.244889  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.245249  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.245860  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.245898  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.246480  485208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 12:12:25.247760  485208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:12:25.247789  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 12:12:25.247810  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:25.250941  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.251481  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:25.251523  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.251755  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:25.252095  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:25.252300  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:25.252451  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:25.262005  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0812 12:12:25.262416  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.262950  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.262979  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.263366  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.263628  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.265034  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:25.265278  485208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 12:12:25.265294  485208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 12:12:25.265311  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:25.268020  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.268411  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:25.268435  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.268586  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:25.268765  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:25.268914  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:25.269112  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:25.317753  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 12:12:25.408150  485208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 12:12:25.438925  485208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:12:25.822035  485208 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0812 12:12:25.846330  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.846362  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.846702  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.846731  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:25.846745  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.846754  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.847037  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:25.847080  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.847099  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:25.847240  485208 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0812 12:12:25.847254  485208 round_trippers.go:469] Request Headers:
	I0812 12:12:25.847266  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:12:25.847271  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:12:25.854571  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:12:25.855182  485208 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0812 12:12:25.855198  485208 round_trippers.go:469] Request Headers:
	I0812 12:12:25.855207  485208 round_trippers.go:473]     Content-Type: application/json
	I0812 12:12:25.855212  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:12:25.855221  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:12:25.857512  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:12:25.857680  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.857693  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.857975  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:25.858025  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.858034  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.097820  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:26.097856  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:26.098271  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:26.098324  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:26.098332  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.098346  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:26.098354  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:26.098632  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:26.098654  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.098642  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:26.100480  485208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0812 12:12:26.101781  485208 addons.go:510] duration metric: took 895.613385ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 12:12:26.101836  485208 start.go:246] waiting for cluster config update ...
	I0812 12:12:26.101852  485208 start.go:255] writing updated cluster config ...
	I0812 12:12:26.103379  485208 out.go:177] 
	I0812 12:12:26.104712  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:26.104819  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:26.107048  485208 out.go:177] * Starting "ha-220134-m02" control-plane node in "ha-220134" cluster
	I0812 12:12:26.108313  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:12:26.108350  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:12:26.108464  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:12:26.108480  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:12:26.108557  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:26.108732  485208 start.go:360] acquireMachinesLock for ha-220134-m02: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:12:26.108796  485208 start.go:364] duration metric: took 43.274µs to acquireMachinesLock for "ha-220134-m02"
	I0812 12:12:26.108821  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:12:26.108927  485208 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0812 12:12:26.110341  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:12:26.110441  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:26.110469  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:26.126544  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0812 12:12:26.127053  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:26.127557  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:26.127581  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:26.127911  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:26.128171  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:26.128340  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:26.128586  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:12:26.128616  485208 client.go:168] LocalClient.Create starting
	I0812 12:12:26.128650  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:12:26.128691  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:12:26.128711  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:12:26.128778  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:12:26.128799  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:12:26.128811  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:12:26.128825  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:12:26.128833  485208 main.go:141] libmachine: (ha-220134-m02) Calling .PreCreateCheck
	I0812 12:12:26.129005  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:26.129451  485208 main.go:141] libmachine: Creating machine...
	I0812 12:12:26.129465  485208 main.go:141] libmachine: (ha-220134-m02) Calling .Create
	I0812 12:12:26.129610  485208 main.go:141] libmachine: (ha-220134-m02) Creating KVM machine...
	I0812 12:12:26.130849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found existing default KVM network
	I0812 12:12:26.130996  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found existing private KVM network mk-ha-220134
	I0812 12:12:26.131205  485208 main.go:141] libmachine: (ha-220134-m02) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 ...
	I0812 12:12:26.131238  485208 main.go:141] libmachine: (ha-220134-m02) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:12:26.131302  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.131184  485583 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:12:26.131375  485208 main.go:141] libmachine: (ha-220134-m02) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:12:26.432155  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.431990  485583 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa...
	I0812 12:12:26.836485  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.836306  485583 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/ha-220134-m02.rawdisk...
	I0812 12:12:26.836530  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Writing magic tar header
	I0812 12:12:26.836575  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 (perms=drwx------)
	I0812 12:12:26.836621  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Writing SSH key tar header
	I0812 12:12:26.836635  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:12:26.836650  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:12:26.836659  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:12:26.836670  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:12:26.836686  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:12:26.836699  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.836420  485583 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 ...
	I0812 12:12:26.836713  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02
	I0812 12:12:26.836722  485208 main.go:141] libmachine: (ha-220134-m02) Creating domain...
	I0812 12:12:26.836744  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:12:26.836760  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:12:26.836774  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:12:26.836785  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:12:26.836796  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:12:26.836808  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home
	I0812 12:12:26.836822  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Skipping /home - not owner
	I0812 12:12:26.837790  485208 main.go:141] libmachine: (ha-220134-m02) define libvirt domain using xml: 
	I0812 12:12:26.837818  485208 main.go:141] libmachine: (ha-220134-m02) <domain type='kvm'>
	I0812 12:12:26.837828  485208 main.go:141] libmachine: (ha-220134-m02)   <name>ha-220134-m02</name>
	I0812 12:12:26.837837  485208 main.go:141] libmachine: (ha-220134-m02)   <memory unit='MiB'>2200</memory>
	I0812 12:12:26.837845  485208 main.go:141] libmachine: (ha-220134-m02)   <vcpu>2</vcpu>
	I0812 12:12:26.837855  485208 main.go:141] libmachine: (ha-220134-m02)   <features>
	I0812 12:12:26.837864  485208 main.go:141] libmachine: (ha-220134-m02)     <acpi/>
	I0812 12:12:26.837873  485208 main.go:141] libmachine: (ha-220134-m02)     <apic/>
	I0812 12:12:26.837881  485208 main.go:141] libmachine: (ha-220134-m02)     <pae/>
	I0812 12:12:26.837890  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.837901  485208 main.go:141] libmachine: (ha-220134-m02)   </features>
	I0812 12:12:26.837911  485208 main.go:141] libmachine: (ha-220134-m02)   <cpu mode='host-passthrough'>
	I0812 12:12:26.837921  485208 main.go:141] libmachine: (ha-220134-m02)   
	I0812 12:12:26.837934  485208 main.go:141] libmachine: (ha-220134-m02)   </cpu>
	I0812 12:12:26.837945  485208 main.go:141] libmachine: (ha-220134-m02)   <os>
	I0812 12:12:26.837955  485208 main.go:141] libmachine: (ha-220134-m02)     <type>hvm</type>
	I0812 12:12:26.837962  485208 main.go:141] libmachine: (ha-220134-m02)     <boot dev='cdrom'/>
	I0812 12:12:26.837972  485208 main.go:141] libmachine: (ha-220134-m02)     <boot dev='hd'/>
	I0812 12:12:26.837986  485208 main.go:141] libmachine: (ha-220134-m02)     <bootmenu enable='no'/>
	I0812 12:12:26.837996  485208 main.go:141] libmachine: (ha-220134-m02)   </os>
	I0812 12:12:26.838034  485208 main.go:141] libmachine: (ha-220134-m02)   <devices>
	I0812 12:12:26.838065  485208 main.go:141] libmachine: (ha-220134-m02)     <disk type='file' device='cdrom'>
	I0812 12:12:26.838086  485208 main.go:141] libmachine: (ha-220134-m02)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/boot2docker.iso'/>
	I0812 12:12:26.838097  485208 main.go:141] libmachine: (ha-220134-m02)       <target dev='hdc' bus='scsi'/>
	I0812 12:12:26.838110  485208 main.go:141] libmachine: (ha-220134-m02)       <readonly/>
	I0812 12:12:26.838120  485208 main.go:141] libmachine: (ha-220134-m02)     </disk>
	I0812 12:12:26.838130  485208 main.go:141] libmachine: (ha-220134-m02)     <disk type='file' device='disk'>
	I0812 12:12:26.838148  485208 main.go:141] libmachine: (ha-220134-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:12:26.838164  485208 main.go:141] libmachine: (ha-220134-m02)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/ha-220134-m02.rawdisk'/>
	I0812 12:12:26.838175  485208 main.go:141] libmachine: (ha-220134-m02)       <target dev='hda' bus='virtio'/>
	I0812 12:12:26.838185  485208 main.go:141] libmachine: (ha-220134-m02)     </disk>
	I0812 12:12:26.838208  485208 main.go:141] libmachine: (ha-220134-m02)     <interface type='network'>
	I0812 12:12:26.838230  485208 main.go:141] libmachine: (ha-220134-m02)       <source network='mk-ha-220134'/>
	I0812 12:12:26.838251  485208 main.go:141] libmachine: (ha-220134-m02)       <model type='virtio'/>
	I0812 12:12:26.838264  485208 main.go:141] libmachine: (ha-220134-m02)     </interface>
	I0812 12:12:26.838274  485208 main.go:141] libmachine: (ha-220134-m02)     <interface type='network'>
	I0812 12:12:26.838286  485208 main.go:141] libmachine: (ha-220134-m02)       <source network='default'/>
	I0812 12:12:26.838297  485208 main.go:141] libmachine: (ha-220134-m02)       <model type='virtio'/>
	I0812 12:12:26.838306  485208 main.go:141] libmachine: (ha-220134-m02)     </interface>
	I0812 12:12:26.838313  485208 main.go:141] libmachine: (ha-220134-m02)     <serial type='pty'>
	I0812 12:12:26.838353  485208 main.go:141] libmachine: (ha-220134-m02)       <target port='0'/>
	I0812 12:12:26.838377  485208 main.go:141] libmachine: (ha-220134-m02)     </serial>
	I0812 12:12:26.838388  485208 main.go:141] libmachine: (ha-220134-m02)     <console type='pty'>
	I0812 12:12:26.838397  485208 main.go:141] libmachine: (ha-220134-m02)       <target type='serial' port='0'/>
	I0812 12:12:26.838409  485208 main.go:141] libmachine: (ha-220134-m02)     </console>
	I0812 12:12:26.838416  485208 main.go:141] libmachine: (ha-220134-m02)     <rng model='virtio'>
	I0812 12:12:26.838429  485208 main.go:141] libmachine: (ha-220134-m02)       <backend model='random'>/dev/random</backend>
	I0812 12:12:26.838436  485208 main.go:141] libmachine: (ha-220134-m02)     </rng>
	I0812 12:12:26.838459  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.838477  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.838491  485208 main.go:141] libmachine: (ha-220134-m02)   </devices>
	I0812 12:12:26.838508  485208 main.go:141] libmachine: (ha-220134-m02) </domain>
	I0812 12:12:26.838521  485208 main.go:141] libmachine: (ha-220134-m02) 
	I0812 12:12:26.846325  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:03:92:6e in network default
	I0812 12:12:26.846935  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:26.846954  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring networks are active...
	I0812 12:12:26.847833  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring network default is active
	I0812 12:12:26.848203  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring network mk-ha-220134 is active
	I0812 12:12:26.848670  485208 main.go:141] libmachine: (ha-220134-m02) Getting domain xml...
	I0812 12:12:26.849472  485208 main.go:141] libmachine: (ha-220134-m02) Creating domain...
	I0812 12:12:28.117896  485208 main.go:141] libmachine: (ha-220134-m02) Waiting to get IP...
	I0812 12:12:28.118674  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.119175  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.119218  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.119155  485583 retry.go:31] will retry after 262.905369ms: waiting for machine to come up
	I0812 12:12:28.383737  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.384220  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.384247  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.384169  485583 retry.go:31] will retry after 274.17147ms: waiting for machine to come up
	I0812 12:12:28.660575  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.661106  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.661137  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.661042  485583 retry.go:31] will retry after 326.621097ms: waiting for machine to come up
	I0812 12:12:28.989757  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.990290  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.990317  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.990241  485583 retry.go:31] will retry after 445.162771ms: waiting for machine to come up
	I0812 12:12:29.436700  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:29.437219  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:29.437249  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:29.437167  485583 retry.go:31] will retry after 590.153733ms: waiting for machine to come up
	I0812 12:12:30.029313  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:30.029881  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:30.029912  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:30.029830  485583 retry.go:31] will retry after 932.683171ms: waiting for machine to come up
	I0812 12:12:30.964131  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:30.964693  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:30.964717  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:30.964642  485583 retry.go:31] will retry after 1.16412614s: waiting for machine to come up
	I0812 12:12:32.130419  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:32.130736  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:32.130763  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:32.130695  485583 retry.go:31] will retry after 1.362857789s: waiting for machine to come up
	I0812 12:12:33.495374  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:33.495874  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:33.495913  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:33.495802  485583 retry.go:31] will retry after 1.2101351s: waiting for machine to come up
	I0812 12:12:34.708476  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:34.709004  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:34.709034  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:34.708942  485583 retry.go:31] will retry after 1.883302747s: waiting for machine to come up
	I0812 12:12:36.594343  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:36.594849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:36.594881  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:36.594819  485583 retry.go:31] will retry after 2.391027616s: waiting for machine to come up
	I0812 12:12:38.987566  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:38.988067  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:38.988089  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:38.988028  485583 retry.go:31] will retry after 2.394690775s: waiting for machine to come up
	I0812 12:12:41.383854  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:41.384225  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:41.384255  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:41.384169  485583 retry.go:31] will retry after 3.613894384s: waiting for machine to come up
	I0812 12:12:45.002003  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:45.002449  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:45.002472  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:45.002405  485583 retry.go:31] will retry after 3.766857993s: waiting for machine to come up
	I0812 12:12:48.772357  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.772989  485208 main.go:141] libmachine: (ha-220134-m02) Found IP for machine: 192.168.39.215
	I0812 12:12:48.773012  485208 main.go:141] libmachine: (ha-220134-m02) Reserving static IP address...
	I0812 12:12:48.773026  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has current primary IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.773477  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find host DHCP lease matching {name: "ha-220134-m02", mac: "52:54:00:fc:dc:57", ip: "192.168.39.215"} in network mk-ha-220134
	I0812 12:12:48.852314  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Getting to WaitForSSH function...
	I0812 12:12:48.852359  485208 main.go:141] libmachine: (ha-220134-m02) Reserved static IP address: 192.168.39.215
	I0812 12:12:48.852373  485208 main.go:141] libmachine: (ha-220134-m02) Waiting for SSH to be available...
	I0812 12:12:48.854740  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.855205  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:48.855231  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.855419  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using SSH client type: external
	I0812 12:12:48.855447  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa (-rw-------)
	I0812 12:12:48.855508  485208 main.go:141] libmachine: (ha-220134-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:12:48.855533  485208 main.go:141] libmachine: (ha-220134-m02) DBG | About to run SSH command:
	I0812 12:12:48.855550  485208 main.go:141] libmachine: (ha-220134-m02) DBG | exit 0
	I0812 12:12:48.981611  485208 main.go:141] libmachine: (ha-220134-m02) DBG | SSH cmd err, output: <nil>: 
	I0812 12:12:48.981869  485208 main.go:141] libmachine: (ha-220134-m02) KVM machine creation complete!
	I0812 12:12:48.982242  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:48.982891  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:48.983139  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:48.983324  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:12:48.983339  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:12:48.984780  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:12:48.984799  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:12:48.984807  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:12:48.984816  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:48.987134  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.987559  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:48.987592  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.987724  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:48.987893  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:48.988063  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:48.988220  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:48.988403  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:48.988722  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:48.988737  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:12:49.092550  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:12:49.092574  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:12:49.092583  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.095355  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.095830  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.095857  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.096059  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.096278  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.096482  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.096693  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.096878  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.097070  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.097102  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:12:49.202432  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:12:49.202537  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:12:49.202566  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:12:49.202580  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.202928  485208 buildroot.go:166] provisioning hostname "ha-220134-m02"
	I0812 12:12:49.202965  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.203215  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.206657  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.207060  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.207105  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.207272  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.207507  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.207695  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.207865  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.208069  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.208246  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.208258  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134-m02 && echo "ha-220134-m02" | sudo tee /etc/hostname
	I0812 12:12:49.328118  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134-m02
	
	I0812 12:12:49.328173  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.331055  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.331459  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.331487  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.331685  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.331911  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.332097  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.332230  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.332422  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.332612  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.332629  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:12:49.446865  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:12:49.446910  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:12:49.446941  485208 buildroot.go:174] setting up certificates
	I0812 12:12:49.446956  485208 provision.go:84] configureAuth start
	I0812 12:12:49.446970  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.447372  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:49.450255  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.450653  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.450685  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.450864  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.453310  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.453558  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.453584  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.453732  485208 provision.go:143] copyHostCerts
	I0812 12:12:49.453761  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:12:49.453794  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:12:49.453803  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:12:49.453869  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:12:49.453963  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:12:49.453982  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:12:49.453988  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:12:49.454015  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:12:49.454092  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:12:49.454109  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:12:49.454116  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:12:49.454139  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:12:49.454222  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134-m02 san=[127.0.0.1 192.168.39.215 ha-220134-m02 localhost minikube]
	I0812 12:12:49.543100  485208 provision.go:177] copyRemoteCerts
	I0812 12:12:49.543166  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:12:49.543197  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.546099  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.546414  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.546443  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.546709  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.546929  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.547117  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.547271  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:49.632125  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:12:49.632207  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:12:49.658475  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:12:49.658555  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:12:49.683939  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:12:49.684009  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:12:49.709945  485208 provision.go:87] duration metric: took 262.97201ms to configureAuth
	I0812 12:12:49.709980  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:12:49.710159  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:49.710252  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.713109  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.713455  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.713538  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.713695  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.713907  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.714119  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.714302  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.714455  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.714657  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.714680  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:12:49.984937  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:12:49.984964  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:12:49.984973  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetURL
	I0812 12:12:49.986360  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using libvirt version 6000000
	I0812 12:12:49.988741  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.989181  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.989210  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.989401  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:12:49.989415  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:12:49.989424  485208 client.go:171] duration metric: took 23.860800317s to LocalClient.Create
	I0812 12:12:49.989452  485208 start.go:167] duration metric: took 23.860867443s to libmachine.API.Create "ha-220134"
	I0812 12:12:49.989465  485208 start.go:293] postStartSetup for "ha-220134-m02" (driver="kvm2")
	I0812 12:12:49.989481  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:12:49.989510  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:49.989775  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:12:49.989801  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.992084  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.992400  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.992425  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.992633  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.992875  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.993045  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.993189  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.076449  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:12:50.080833  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:12:50.080866  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:12:50.080940  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:12:50.081038  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:12:50.081053  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:12:50.081202  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:12:50.091441  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:50.118816  485208 start.go:296] duration metric: took 129.330027ms for postStartSetup
	I0812 12:12:50.118877  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:50.119557  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:50.122565  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.122866  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.122887  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.123226  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:50.123424  485208 start.go:128] duration metric: took 24.01448395s to createHost
	I0812 12:12:50.123453  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:50.125600  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.126037  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.126070  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.126232  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.126402  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.126604  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.126757  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.126928  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:50.127093  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:50.127104  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:12:50.234361  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464770.206721376
	
	I0812 12:12:50.234390  485208 fix.go:216] guest clock: 1723464770.206721376
	I0812 12:12:50.234398  485208 fix.go:229] Guest: 2024-08-12 12:12:50.206721376 +0000 UTC Remote: 2024-08-12 12:12:50.123437393 +0000 UTC m=+76.974012260 (delta=83.283983ms)
	I0812 12:12:50.234416  485208 fix.go:200] guest clock delta is within tolerance: 83.283983ms
	I0812 12:12:50.234421  485208 start.go:83] releasing machines lock for "ha-220134-m02", held for 24.125613567s
	I0812 12:12:50.234440  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.234724  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:50.237266  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.237599  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.237630  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.240221  485208 out.go:177] * Found network options:
	I0812 12:12:50.242077  485208 out.go:177]   - NO_PROXY=192.168.39.228
	W0812 12:12:50.243527  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:12:50.243567  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244201  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244431  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244531  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:12:50.244579  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	W0812 12:12:50.244741  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:12:50.244830  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:12:50.244860  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:50.247473  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.247819  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.247897  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.247931  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.248060  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.248147  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.248170  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.248224  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.248392  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.248404  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.248655  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.248656  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.248918  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.249177  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.482875  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:12:50.490758  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:12:50.490864  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:12:50.509960  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
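The find/mv step above simply renames any bridge or podman CNI config so it cannot conflict with the bridge CNI minikube configures itself. A rough pure-Go equivalent; the directory and the .mk_disabled suffix come from the log, while the function name is invented:

// disable_cni_sketch.go - rename conflicting CNI configs out of the way,
// mirroring the find/mv step in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames every *bridge* or *podman* config in dir that
// has not already been disabled, and returns the list of files it moved.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Printf("disabled %d config(s): %v\n", len(moved), moved)
}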
	I0812 12:12:50.509991  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:12:50.510078  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:12:50.527618  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:12:50.543614  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:12:50.543694  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:12:50.559822  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:12:50.576001  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:12:50.715009  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:12:50.866600  485208 docker.go:233] disabling docker service ...
	I0812 12:12:50.866685  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:12:50.881392  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:12:50.894903  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:12:51.040092  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:12:51.181349  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:12:51.205762  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:12:51.226430  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:12:51.226502  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.238815  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:12:51.238893  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.250801  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.262713  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.274193  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:12:51.285788  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.297333  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.316344  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
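The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place; the two central substitutions set the pause image and the cgroup manager. A hedged sketch of just those two rewrites in Go; the regexes approximate the sed patterns in the log and are not minikube's code:

// crio_conf_sketch.go - rewrite pause_image and cgroup_manager in a CRI-O
// drop-in, mirroring the sed edits above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf replaces the pause image and cgroup manager lines in the
// given config text, leaving everything else untouched.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	out := rewriteCrioConf(string(data), "registry.k8s.io/pause:3.9", "cgroupfs")
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("updated", path, "- restart crio to apply")
}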
	I0812 12:12:51.327609  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:12:51.337347  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:12:51.337412  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:12:51.351439  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:12:51.361192  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:12:51.473284  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:12:51.613515  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:12:51.613600  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:12:51.618562  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:12:51.618632  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:12:51.622753  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:12:51.661539  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:12:51.661617  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:12:51.689874  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:12:51.724170  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:12:51.725594  485208 out.go:177]   - env NO_PROXY=192.168.39.228
	I0812 12:12:51.726774  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:51.729472  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:51.729817  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:51.729849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:51.730089  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:12:51.734707  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
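The bash pipeline above strips any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping. The same idea as a small Go sketch; the helper name is invented:

// hosts_entry_sketch.go - replace a host.minikube.internal mapping in an
// /etc/hosts-style file, like the grep/echo/cp pipeline in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line that already maps name (tab-separated, as
// in the grep pattern above) and appends "ip<TAB>name".
func upsertHostsEntry(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending so the file does not grow.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+name, "")
	return strings.Join(kept, "\n")
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	out := upsertHostsEntry(string(data), "192.168.39.1", "host.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
}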
	I0812 12:12:51.747065  485208 mustload.go:65] Loading cluster: ha-220134
	I0812 12:12:51.747331  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:51.747703  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:51.747737  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:51.762680  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0812 12:12:51.763169  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:51.763671  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:51.763694  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:51.764001  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:51.764187  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:51.765663  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:51.765958  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:51.765980  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:51.781778  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0812 12:12:51.782195  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:51.782678  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:51.782703  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:51.783090  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:51.783342  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:51.783513  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.215
	I0812 12:12:51.783523  485208 certs.go:194] generating shared ca certs ...
	I0812 12:12:51.783537  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.783666  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:12:51.783721  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:12:51.783735  485208 certs.go:256] generating profile certs ...
	I0812 12:12:51.783835  485208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:12:51.783869  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297
	I0812 12:12:51.783885  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.254]
	I0812 12:12:51.989980  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 ...
	I0812 12:12:51.990015  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297: {Name:mk904ce98edd04e7af847e314a39147bd4943a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.990196  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297 ...
	I0812 12:12:51.990210  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297: {Name:mk70d2b31dca95723cdb80442908c3afbe83d830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.990282  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:12:51.990416  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:12:51.990547  485208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
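At this point a new apiserver serving certificate is minted for the profile with the service IP, loopback, both node IPs and the HA virtual IP 192.168.39.254 as subject alternative names. A rough crypto/x509 sketch of issuing a certificate with those IP SANs; it is self-signed here for brevity, whereas minikube signs it with the cluster CA:

// apiserver_cert_sketch.go - issue a TLS cert whose SANs include the node IPs
// and the HA virtual IP, roughly like the apiserver cert generated above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the log: service IP, loopback, node IPs, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.228"), net.ParseIP("192.168.39.215"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for the sketch: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}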
	I0812 12:12:51.990565  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:12:51.990579  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:12:51.990590  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:12:51.990600  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:12:51.990610  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:12:51.990620  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:12:51.990628  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:12:51.990638  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:12:51.990685  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:12:51.990716  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:12:51.990726  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:12:51.990746  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:12:51.990767  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:12:51.990797  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:12:51.990844  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:51.990870  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:12:51.990884  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:51.990896  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:12:51.990929  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:51.994763  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:51.995295  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:51.995330  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:51.995544  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:51.995809  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:51.996017  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:51.996163  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:52.069573  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0812 12:12:52.076273  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 12:12:52.090470  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0812 12:12:52.095417  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0812 12:12:52.108460  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 12:12:52.113028  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 12:12:52.123582  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0812 12:12:52.127631  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 12:12:52.137793  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0812 12:12:52.141990  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 12:12:52.152553  485208 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0812 12:12:52.156755  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 12:12:52.167863  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:12:52.193373  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:12:52.217491  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:12:52.242998  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:12:52.268943  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 12:12:52.295572  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:12:52.322283  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:12:52.349270  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:12:52.378251  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:12:52.404955  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:12:52.430391  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:12:52.454957  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 12:12:52.472014  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0812 12:12:52.488709  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 12:12:52.507654  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 12:12:52.526989  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 12:12:52.546197  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 12:12:52.564842  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 12:12:52.581674  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:12:52.587894  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:12:52.598823  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.603491  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.603546  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.609588  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:12:52.620220  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:12:52.630859  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.635387  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.635454  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.641135  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:12:52.652479  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:12:52.663804  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.668546  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.668604  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.674245  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:12:52.685130  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:12:52.689571  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:12:52.689644  485208 kubeadm.go:934] updating node {m02 192.168.39.215 8443 v1.30.3 crio true true} ...
	I0812 12:12:52.689755  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:12:52.689778  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:12:52.689811  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:12:52.707169  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:12:52.707253  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 12:12:52.707327  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:52.717451  485208 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 12:12:52.717533  485208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:52.727228  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 12:12:52.727257  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:12:52.727352  485208 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:12:52.727377  485208 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0812 12:12:52.727350  485208 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0812 12:12:52.731639  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 12:12:52.731666  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 12:13:24.584364  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:13:24.584507  485208 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:13:24.590913  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 12:13:24.590951  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 12:13:59.119046  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:13:59.135633  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:13:59.135772  485208 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:13:59.141241  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 12:13:59.141281  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
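The kubectl/kubeadm/kubelet binaries are pulled from dl.k8s.io and checked against the published .sha256 files before being copied into /var/lib/minikube/binaries. A sketch of that download-and-verify step for kubelet; the URL pattern is taken from the log, and buffering the whole binary in memory is for brevity only:

// download_verify_sketch.go - download a Kubernetes release binary and check
// it against the published .sha256 file, as in the kubelet transfer above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url fully into memory (fine for a sketch; stream for real use).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified,", len(bin), "bytes")
}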
	I0812 12:13:59.572488  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 12:13:59.582780  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 12:13:59.600764  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:13:59.619020  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:13:59.636410  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:13:59.641212  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:13:59.654356  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:13:59.765868  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:13:59.783638  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:13:59.784000  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:13:59.784028  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:13:59.801445  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0812 12:13:59.802040  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:13:59.802584  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:13:59.802607  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:13:59.803018  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:13:59.803232  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:13:59.803394  485208 start.go:317] joinCluster: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:13:59.803498  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 12:13:59.803514  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:13:59.806558  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:13:59.806974  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:13:59.807004  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:13:59.807223  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:13:59.807396  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:13:59.807587  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:13:59.807773  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:13:59.971957  485208 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:13:59.972017  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dgsrck.rcblur08bhwjdf3e --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m02 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443"
	I0812 12:14:22.462475  485208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dgsrck.rcblur08bhwjdf3e --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m02 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443": (22.490400346s)
	I0812 12:14:22.462526  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 12:14:23.084681  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134-m02 minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=false
	I0812 12:14:23.223450  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220134-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 12:14:23.337117  485208 start.go:319] duration metric: took 23.533715173s to joinCluster
	I0812 12:14:23.337212  485208 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:14:23.337568  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:14:23.338740  485208 out.go:177] * Verifying Kubernetes components...
	I0812 12:14:23.340085  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:14:23.582801  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:14:23.617323  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:14:23.617691  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 12:14:23.617787  485208 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.228:8443
	I0812 12:14:23.618108  485208 node_ready.go:35] waiting up to 6m0s for node "ha-220134-m02" to be "Ready" ...
	I0812 12:14:23.618245  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:23.618256  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:23.618272  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:23.618280  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:23.631756  485208 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0812 12:14:24.118359  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:24.118394  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:24.118406  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:24.118411  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:24.122009  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:24.619036  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:24.619059  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:24.619073  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:24.619078  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:24.622653  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.119070  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:25.119097  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:25.119106  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:25.119111  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:25.122712  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.618850  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:25.618881  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:25.618893  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:25.618899  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:25.622198  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.622660  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
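From here on the log is one long wait loop: GET /api/v1/nodes/ha-220134-m02 roughly every half second until the node's Ready condition turns True. Roughly the same loop written against client-go; the kubeconfig path is a placeholder and the 6-minute timeout mirrors the "waiting up to 6m0s" line above:

// wait_node_ready_sketch.go - poll a node until its Ready condition is True,
// approximating minikube's node_ready wait loop seen in this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-220134-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}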
	I0812 12:14:26.119272  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:26.119296  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:26.119305  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:26.119309  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:26.123464  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:26.618980  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:26.619005  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:26.619014  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:26.619019  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:26.624758  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:27.118647  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:27.118672  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:27.118680  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:27.118684  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:27.122668  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:27.618399  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:27.618427  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:27.618437  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:27.618441  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:27.621763  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:28.118726  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:28.118750  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:28.118759  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:28.118763  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:28.122740  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:28.123486  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:28.618568  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:28.618598  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:28.618609  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:28.618613  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:28.622521  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:29.118432  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:29.118460  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:29.118469  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:29.118474  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:29.122424  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:29.618631  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:29.618659  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:29.618671  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:29.618679  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:29.622870  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:30.118349  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:30.118371  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:30.118380  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:30.118385  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:30.125291  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:14:30.126391  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:30.618803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:30.618828  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:30.618836  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:30.618840  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:30.622124  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:31.118776  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:31.118800  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:31.118808  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:31.118814  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:31.122290  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:31.618666  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:31.618692  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:31.618700  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:31.618704  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:31.622778  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:32.118883  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:32.118908  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:32.118917  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:32.118921  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:32.122690  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:32.618565  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:32.618592  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:32.618602  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:32.618610  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:32.624694  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:14:32.625151  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:33.118572  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:33.118598  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:33.118607  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:33.118611  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:33.122028  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:33.618591  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:33.618614  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:33.618624  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:33.618629  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:33.622009  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:34.118480  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:34.118506  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:34.118515  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:34.118518  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:34.122091  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:34.619213  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:34.619239  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:34.619248  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:34.619252  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:34.623144  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:35.118522  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:35.118556  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:35.118567  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:35.118574  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:35.123720  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:35.124464  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:35.619021  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:35.619051  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:35.619062  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:35.619069  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:35.624416  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:36.118357  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:36.118380  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:36.118391  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:36.118394  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:36.121777  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:36.618331  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:36.618355  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:36.618364  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:36.618369  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:36.622317  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.118590  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:37.118616  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:37.118623  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:37.118628  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:37.122219  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.618339  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:37.618366  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:37.618374  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:37.618377  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:37.622282  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.623098  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:38.118359  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:38.118389  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:38.118399  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:38.118405  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:38.122266  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:38.618381  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:38.618407  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:38.618415  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:38.618420  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:38.622698  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:39.118852  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:39.118878  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:39.118887  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:39.118891  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:39.122666  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:39.618864  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:39.618898  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:39.618908  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:39.618915  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:39.622125  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:40.118768  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:40.118800  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:40.118817  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:40.118823  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:40.122317  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:40.122824  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:40.619331  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:40.619362  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:40.619375  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:40.619380  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:40.622748  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:41.118780  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:41.118810  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:41.118821  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:41.118829  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:41.122815  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:41.618552  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:41.618579  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:41.618589  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:41.618597  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:41.622268  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.118438  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.118474  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.118485  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.118492  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.121817  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.122503  485208 node_ready.go:49] node "ha-220134-m02" has status "Ready":"True"
	I0812 12:14:42.122528  485208 node_ready.go:38] duration metric: took 18.504397722s for node "ha-220134-m02" to be "Ready" ...
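(Annotation, not part of the test output.) The ~500ms GET loop against /api/v1/nodes/ha-220134-m02 above is minikube's node_ready wait. As a minimal illustrative sketch only, assuming client-go, a kubeconfig at the default location, and an arbitrary 6m timeout, the same readiness check looks roughly like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the ~500ms GET loop visible in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	// Assumed kubeconfig path; minikube writes its own under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-220134-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}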
	I0812 12:14:42.122542  485208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:14:42.122634  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:42.122649  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.122660  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.122670  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.127753  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:42.134490  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.134615  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mtqtk
	I0812 12:14:42.134629  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.134640  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.134646  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.137835  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.138797  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.138816  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.138826  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.138832  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.141438  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.142011  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.142026  485208 pod_ready.go:81] duration metric: took 7.499039ms for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.142038  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.142104  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t8pg7
	I0812 12:14:42.142113  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.142120  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.142124  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.145303  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.146138  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.146160  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.146170  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.146176  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.150866  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:42.151425  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.151447  485208 pod_ready.go:81] duration metric: took 9.399509ms for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.151457  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.151518  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134
	I0812 12:14:42.151527  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.151534  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.151537  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.154655  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.155164  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.155180  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.155187  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.155191  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.157554  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.158099  485208 pod_ready.go:92] pod "etcd-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.158119  485208 pod_ready.go:81] duration metric: took 6.655004ms for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.158131  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.158256  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m02
	I0812 12:14:42.158269  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.158277  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.158282  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.160828  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.161508  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.161528  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.161538  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.161545  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.164082  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.164525  485208 pod_ready.go:92] pod "etcd-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.164552  485208 pod_ready.go:81] duration metric: took 6.412866ms for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.164575  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.319083  485208 request.go:629] Waited for 154.40374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:14:42.319182  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:14:42.319190  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.319205  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.319214  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.322923  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.519186  485208 request.go:629] Waited for 195.458039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.519273  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.519279  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.519286  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.519290  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.522936  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.523668  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.523700  485208 pod_ready.go:81] duration metric: took 359.109868ms for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.523714  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.718822  485208 request.go:629] Waited for 195.000146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:14:42.718905  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:14:42.718911  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.718920  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.718929  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.722637  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.918759  485208 request.go:629] Waited for 195.425883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.918827  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.918835  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.918843  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.918849  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.922131  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.922847  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.922869  485208 pod_ready.go:81] duration metric: took 399.143428ms for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.922881  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.119019  485208 request.go:629] Waited for 196.034578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:14:43.119100  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:14:43.119108  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.119120  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.119132  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.123174  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:43.319490  485208 request.go:629] Waited for 195.267129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:43.319565  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:43.319574  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.319582  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.319589  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.322678  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.323332  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:43.323359  485208 pod_ready.go:81] duration metric: took 400.471136ms for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.323370  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.519331  485208 request.go:629] Waited for 195.852908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:14:43.519430  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:14:43.519442  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.519452  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.519460  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.523238  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.719361  485208 request.go:629] Waited for 195.430203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:43.719464  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:43.719470  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.719477  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.719482  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.723467  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.723958  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:43.723977  485208 pod_ready.go:81] duration metric: took 400.601195ms for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.723987  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.919229  485208 request.go:629] Waited for 195.141841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:14:43.919322  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:14:43.919330  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.919342  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.919352  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.922963  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.119062  485208 request.go:629] Waited for 195.406086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:44.119151  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:44.119159  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.119178  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.119201  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.122989  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.123805  485208 pod_ready.go:92] pod "kube-proxy-bs72f" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.123824  485208 pod_ready.go:81] duration metric: took 399.831421ms for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.123834  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.318966  485208 request.go:629] Waited for 195.049756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:14:44.319089  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:14:44.319102  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.319112  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.319123  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.322640  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.518604  485208 request.go:629] Waited for 195.303631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.518675  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.518681  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.518694  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.518701  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.522291  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.522947  485208 pod_ready.go:92] pod "kube-proxy-zcgh8" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.522970  485208 pod_ready.go:81] duration metric: took 399.128934ms for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.522985  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.718944  485208 request.go:629] Waited for 195.868915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:14:44.719010  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:14:44.719016  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.719028  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.719035  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.722951  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.919343  485208 request.go:629] Waited for 195.527273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.919418  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.919425  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.919433  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.919437  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.924372  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:44.924861  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.924882  485208 pod_ready.go:81] duration metric: took 401.890241ms for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.924891  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:45.118952  485208 request.go:629] Waited for 193.966343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:14:45.119032  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:14:45.119037  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.119051  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.119056  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.122400  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:45.319159  485208 request.go:629] Waited for 196.184111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:45.319258  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:45.319267  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.319279  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.319291  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.332040  485208 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0812 12:14:45.332524  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:45.332557  485208 pod_ready.go:81] duration metric: took 407.658979ms for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:45.332568  485208 pod_ready.go:38] duration metric: took 3.210007717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
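(Annotation, not part of the test output.) The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (roughly QPS 5, burst 10 when unset), which delays the back-to-back pod/node GETs during the pod_ready checks. A minimal sketch, assuming a standard rest.Config, of raising those limits; the values 50/100 are arbitrary:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the environment under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// When left at zero, client-go falls back to low defaults, which is what
	// produces the client-side throttling waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg) // subsequent requests use the higher limits
}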
	I0812 12:14:45.332589  485208 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:14:45.332653  485208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:14:45.353986  485208 api_server.go:72] duration metric: took 22.016705095s to wait for apiserver process to appear ...
	I0812 12:14:45.354022  485208 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:14:45.354051  485208 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0812 12:14:45.366431  485208 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0812 12:14:45.366536  485208 round_trippers.go:463] GET https://192.168.39.228:8443/version
	I0812 12:14:45.366546  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.366558  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.366568  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.367697  485208 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0812 12:14:45.367860  485208 api_server.go:141] control plane version: v1.30.3
	I0812 12:14:45.367885  485208 api_server.go:131] duration metric: took 13.854938ms to wait for apiserver health ...
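(Annotation, not part of the test output.) The healthz probe above is an HTTPS GET against https://192.168.39.228:8443/healthz that succeeds once the endpoint returns 200/ok. A rough equivalent in Go for illustration only: certificate verification is skipped here, whereas minikube trusts the cluster CA, and depending on the cluster's anonymous-auth/RBAC settings an unauthenticated request may be rejected instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only for this sketch; the real check uses the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.228:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" once the apiserver is healthy
}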
	I0812 12:14:45.367894  485208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:14:45.519121  485208 request.go:629] Waited for 151.144135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.519186  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.519191  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.519205  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.519210  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.524373  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:45.529010  485208 system_pods.go:59] 17 kube-system pods found
	I0812 12:14:45.529045  485208 system_pods.go:61] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:14:45.529051  485208 system_pods.go:61] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:14:45.529055  485208 system_pods.go:61] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:14:45.529058  485208 system_pods.go:61] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:14:45.529061  485208 system_pods.go:61] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:14:45.529065  485208 system_pods.go:61] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:14:45.529068  485208 system_pods.go:61] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:14:45.529071  485208 system_pods.go:61] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:14:45.529076  485208 system_pods.go:61] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:14:45.529090  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:14:45.529098  485208 system_pods.go:61] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:14:45.529103  485208 system_pods.go:61] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:14:45.529112  485208 system_pods.go:61] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:14:45.529117  485208 system_pods.go:61] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:14:45.529124  485208 system_pods.go:61] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:14:45.529129  485208 system_pods.go:61] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:14:45.529133  485208 system_pods.go:61] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:14:45.529139  485208 system_pods.go:74] duration metric: took 161.238707ms to wait for pod list to return data ...
	I0812 12:14:45.529150  485208 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:14:45.718564  485208 request.go:629] Waited for 189.321436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:14:45.718696  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:14:45.718708  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.718716  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.718722  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.722424  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:45.722678  485208 default_sa.go:45] found service account: "default"
	I0812 12:14:45.722695  485208 default_sa.go:55] duration metric: took 193.536981ms for default service account to be created ...
	I0812 12:14:45.722704  485208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:14:45.919028  485208 request.go:629] Waited for 196.232627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.919104  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.919112  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.919122  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.919131  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.928358  485208 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 12:14:45.933300  485208 system_pods.go:86] 17 kube-system pods found
	I0812 12:14:45.933332  485208 system_pods.go:89] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:14:45.933338  485208 system_pods.go:89] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:14:45.933342  485208 system_pods.go:89] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:14:45.933346  485208 system_pods.go:89] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:14:45.933350  485208 system_pods.go:89] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:14:45.933355  485208 system_pods.go:89] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:14:45.933359  485208 system_pods.go:89] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:14:45.933363  485208 system_pods.go:89] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:14:45.933367  485208 system_pods.go:89] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:14:45.933371  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:14:45.933375  485208 system_pods.go:89] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:14:45.933378  485208 system_pods.go:89] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:14:45.933382  485208 system_pods.go:89] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:14:45.933387  485208 system_pods.go:89] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:14:45.933391  485208 system_pods.go:89] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:14:45.933394  485208 system_pods.go:89] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:14:45.933398  485208 system_pods.go:89] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:14:45.933405  485208 system_pods.go:126] duration metric: took 210.695106ms to wait for k8s-apps to be running ...
	I0812 12:14:45.933414  485208 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:14:45.933465  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:14:45.952301  485208 system_svc.go:56] duration metric: took 18.873436ms WaitForService to wait for kubelet
	I0812 12:14:45.952333  485208 kubeadm.go:582] duration metric: took 22.615059023s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:14:45.952354  485208 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:14:46.118844  485208 request.go:629] Waited for 166.394903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes
	I0812 12:14:46.118934  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes
	I0812 12:14:46.118939  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:46.118947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:46.118952  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:46.122551  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:46.123303  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:14:46.123344  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:14:46.123380  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:14:46.123387  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:14:46.123398  485208 node_conditions.go:105] duration metric: took 171.038039ms to run NodePressure ...
	I0812 12:14:46.123418  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:14:46.123468  485208 start.go:255] writing updated cluster config ...
	I0812 12:14:46.125754  485208 out.go:177] 
	I0812 12:14:46.127730  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:14:46.127883  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:14:46.129594  485208 out.go:177] * Starting "ha-220134-m03" control-plane node in "ha-220134" cluster
	I0812 12:14:46.131036  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:14:46.131075  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:14:46.131203  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:14:46.131219  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:14:46.131350  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:14:46.132144  485208 start.go:360] acquireMachinesLock for ha-220134-m03: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:14:46.132212  485208 start.go:364] duration metric: took 32.881µs to acquireMachinesLock for "ha-220134-m03"
	I0812 12:14:46.132232  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:14:46.132423  485208 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0812 12:14:46.134079  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:14:46.134186  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:14:46.134227  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:14:46.150478  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0812 12:14:46.150898  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:14:46.151443  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:14:46.151469  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:14:46.151813  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:14:46.152048  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:14:46.152271  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:14:46.152435  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:14:46.152466  485208 client.go:168] LocalClient.Create starting
	I0812 12:14:46.152506  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:14:46.152558  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:14:46.152581  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:14:46.152654  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:14:46.152682  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:14:46.152698  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:14:46.152723  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:14:46.152735  485208 main.go:141] libmachine: (ha-220134-m03) Calling .PreCreateCheck
	I0812 12:14:46.152913  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:14:46.153323  485208 main.go:141] libmachine: Creating machine...
	I0812 12:14:46.153339  485208 main.go:141] libmachine: (ha-220134-m03) Calling .Create
	I0812 12:14:46.153465  485208 main.go:141] libmachine: (ha-220134-m03) Creating KVM machine...
	I0812 12:14:46.154675  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found existing default KVM network
	I0812 12:14:46.154783  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found existing private KVM network mk-ha-220134
	I0812 12:14:46.154915  485208 main.go:141] libmachine: (ha-220134-m03) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 ...
	I0812 12:14:46.154931  485208 main.go:141] libmachine: (ha-220134-m03) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:14:46.154994  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.154920  486241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:14:46.155107  485208 main.go:141] libmachine: (ha-220134-m03) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:14:46.441625  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.441499  486241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa...
	I0812 12:14:46.630286  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.630122  486241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/ha-220134-m03.rawdisk...
	I0812 12:14:46.630322  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Writing magic tar header
	I0812 12:14:46.630337  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Writing SSH key tar header
	I0812 12:14:46.630352  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.630263  486241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 ...
	I0812 12:14:46.630537  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03
	I0812 12:14:46.630663  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 (perms=drwx------)
	I0812 12:14:46.630680  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:14:46.630701  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:14:46.630710  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:14:46.630720  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:14:46.630747  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:14:46.630765  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:14:46.630778  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home
	I0812 12:14:46.630785  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Skipping /home - not owner
	I0812 12:14:46.630801  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:14:46.630810  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:14:46.630821  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:14:46.630830  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:14:46.630838  485208 main.go:141] libmachine: (ha-220134-m03) Creating domain...
	I0812 12:14:46.631887  485208 main.go:141] libmachine: (ha-220134-m03) define libvirt domain using xml: 
	I0812 12:14:46.631906  485208 main.go:141] libmachine: (ha-220134-m03) <domain type='kvm'>
	I0812 12:14:46.631917  485208 main.go:141] libmachine: (ha-220134-m03)   <name>ha-220134-m03</name>
	I0812 12:14:46.631925  485208 main.go:141] libmachine: (ha-220134-m03)   <memory unit='MiB'>2200</memory>
	I0812 12:14:46.631932  485208 main.go:141] libmachine: (ha-220134-m03)   <vcpu>2</vcpu>
	I0812 12:14:46.631939  485208 main.go:141] libmachine: (ha-220134-m03)   <features>
	I0812 12:14:46.631948  485208 main.go:141] libmachine: (ha-220134-m03)     <acpi/>
	I0812 12:14:46.631956  485208 main.go:141] libmachine: (ha-220134-m03)     <apic/>
	I0812 12:14:46.631966  485208 main.go:141] libmachine: (ha-220134-m03)     <pae/>
	I0812 12:14:46.631976  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632009  485208 main.go:141] libmachine: (ha-220134-m03)   </features>
	I0812 12:14:46.632034  485208 main.go:141] libmachine: (ha-220134-m03)   <cpu mode='host-passthrough'>
	I0812 12:14:46.632059  485208 main.go:141] libmachine: (ha-220134-m03)   
	I0812 12:14:46.632084  485208 main.go:141] libmachine: (ha-220134-m03)   </cpu>
	I0812 12:14:46.632094  485208 main.go:141] libmachine: (ha-220134-m03)   <os>
	I0812 12:14:46.632101  485208 main.go:141] libmachine: (ha-220134-m03)     <type>hvm</type>
	I0812 12:14:46.632109  485208 main.go:141] libmachine: (ha-220134-m03)     <boot dev='cdrom'/>
	I0812 12:14:46.632114  485208 main.go:141] libmachine: (ha-220134-m03)     <boot dev='hd'/>
	I0812 12:14:46.632120  485208 main.go:141] libmachine: (ha-220134-m03)     <bootmenu enable='no'/>
	I0812 12:14:46.632127  485208 main.go:141] libmachine: (ha-220134-m03)   </os>
	I0812 12:14:46.632133  485208 main.go:141] libmachine: (ha-220134-m03)   <devices>
	I0812 12:14:46.632138  485208 main.go:141] libmachine: (ha-220134-m03)     <disk type='file' device='cdrom'>
	I0812 12:14:46.632146  485208 main.go:141] libmachine: (ha-220134-m03)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/boot2docker.iso'/>
	I0812 12:14:46.632158  485208 main.go:141] libmachine: (ha-220134-m03)       <target dev='hdc' bus='scsi'/>
	I0812 12:14:46.632167  485208 main.go:141] libmachine: (ha-220134-m03)       <readonly/>
	I0812 12:14:46.632177  485208 main.go:141] libmachine: (ha-220134-m03)     </disk>
	I0812 12:14:46.632186  485208 main.go:141] libmachine: (ha-220134-m03)     <disk type='file' device='disk'>
	I0812 12:14:46.632197  485208 main.go:141] libmachine: (ha-220134-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:14:46.632208  485208 main.go:141] libmachine: (ha-220134-m03)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/ha-220134-m03.rawdisk'/>
	I0812 12:14:46.632216  485208 main.go:141] libmachine: (ha-220134-m03)       <target dev='hda' bus='virtio'/>
	I0812 12:14:46.632224  485208 main.go:141] libmachine: (ha-220134-m03)     </disk>
	I0812 12:14:46.632229  485208 main.go:141] libmachine: (ha-220134-m03)     <interface type='network'>
	I0812 12:14:46.632237  485208 main.go:141] libmachine: (ha-220134-m03)       <source network='mk-ha-220134'/>
	I0812 12:14:46.632242  485208 main.go:141] libmachine: (ha-220134-m03)       <model type='virtio'/>
	I0812 12:14:46.632248  485208 main.go:141] libmachine: (ha-220134-m03)     </interface>
	I0812 12:14:46.632255  485208 main.go:141] libmachine: (ha-220134-m03)     <interface type='network'>
	I0812 12:14:46.632260  485208 main.go:141] libmachine: (ha-220134-m03)       <source network='default'/>
	I0812 12:14:46.632267  485208 main.go:141] libmachine: (ha-220134-m03)       <model type='virtio'/>
	I0812 12:14:46.632300  485208 main.go:141] libmachine: (ha-220134-m03)     </interface>
	I0812 12:14:46.632326  485208 main.go:141] libmachine: (ha-220134-m03)     <serial type='pty'>
	I0812 12:14:46.632336  485208 main.go:141] libmachine: (ha-220134-m03)       <target port='0'/>
	I0812 12:14:46.632345  485208 main.go:141] libmachine: (ha-220134-m03)     </serial>
	I0812 12:14:46.632354  485208 main.go:141] libmachine: (ha-220134-m03)     <console type='pty'>
	I0812 12:14:46.632364  485208 main.go:141] libmachine: (ha-220134-m03)       <target type='serial' port='0'/>
	I0812 12:14:46.632373  485208 main.go:141] libmachine: (ha-220134-m03)     </console>
	I0812 12:14:46.632383  485208 main.go:141] libmachine: (ha-220134-m03)     <rng model='virtio'>
	I0812 12:14:46.632393  485208 main.go:141] libmachine: (ha-220134-m03)       <backend model='random'>/dev/random</backend>
	I0812 12:14:46.632407  485208 main.go:141] libmachine: (ha-220134-m03)     </rng>
	I0812 12:14:46.632419  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632428  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632436  485208 main.go:141] libmachine: (ha-220134-m03)   </devices>
	I0812 12:14:46.632447  485208 main.go:141] libmachine: (ha-220134-m03) </domain>
	I0812 12:14:46.632461  485208 main.go:141] libmachine: (ha-220134-m03) 
	I0812 12:14:46.639821  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:44:47:08 in network default
	I0812 12:14:46.640512  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring networks are active...
	I0812 12:14:46.640535  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:46.641535  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring network default is active
	I0812 12:14:46.641898  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring network mk-ha-220134 is active
	I0812 12:14:46.642359  485208 main.go:141] libmachine: (ha-220134-m03) Getting domain xml...
	I0812 12:14:46.643166  485208 main.go:141] libmachine: (ha-220134-m03) Creating domain...
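(Annotation, not part of the test output.) The kvm2 driver defines the libvirt domain from the XML printed above and then boots it before waiting for an IP. A minimal sketch of those two steps with the libvirt Go bindings, assuming libvirt.org/go/libvirt, the qemu:///system URI from the cluster config, and a hypothetical ha-220134-m03.xml file holding the <domain> definition:

package main

import (
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-220134-m03.xml") // assumed file containing the <domain> XML
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM; the driver then polls for its IP
		panic(err)
	}
}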
	I0812 12:14:47.884575  485208 main.go:141] libmachine: (ha-220134-m03) Waiting to get IP...
	I0812 12:14:47.885445  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:47.885899  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:47.885971  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:47.885924  486241 retry.go:31] will retry after 188.796368ms: waiting for machine to come up
	I0812 12:14:48.076663  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.077201  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.077238  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.077133  486241 retry.go:31] will retry after 370.309742ms: waiting for machine to come up
	I0812 12:14:48.448719  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.449208  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.449238  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.449178  486241 retry.go:31] will retry after 362.104049ms: waiting for machine to come up
	I0812 12:14:48.812749  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.813248  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.813277  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.813192  486241 retry.go:31] will retry after 420.630348ms: waiting for machine to come up
	I0812 12:14:49.236077  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:49.236649  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:49.236689  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:49.236595  486241 retry.go:31] will retry after 508.154573ms: waiting for machine to come up
	I0812 12:14:49.746293  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:49.746809  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:49.746841  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:49.746748  486241 retry.go:31] will retry after 838.157149ms: waiting for machine to come up
	I0812 12:14:50.586377  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:50.586929  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:50.586961  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:50.586882  486241 retry.go:31] will retry after 851.729786ms: waiting for machine to come up
	I0812 12:14:51.440568  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:51.441091  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:51.441130  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:51.441032  486241 retry.go:31] will retry after 1.010425115s: waiting for machine to come up
	I0812 12:14:52.452738  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:52.453261  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:52.453294  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:52.453174  486241 retry.go:31] will retry after 1.424809996s: waiting for machine to come up
	I0812 12:14:53.879589  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:53.880112  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:53.880146  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:53.880052  486241 retry.go:31] will retry after 1.51155576s: waiting for machine to come up
	I0812 12:14:55.393922  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:55.394399  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:55.394433  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:55.394321  486241 retry.go:31] will retry after 2.74908064s: waiting for machine to come up
	I0812 12:14:58.144733  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:58.145236  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:58.145269  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:58.145177  486241 retry.go:31] will retry after 3.0862077s: waiting for machine to come up
	I0812 12:15:01.233615  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:01.234213  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:15:01.234247  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:15:01.234160  486241 retry.go:31] will retry after 3.24342849s: waiting for machine to come up
	I0812 12:15:04.480919  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:04.481316  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:15:04.481346  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:15:04.481266  486241 retry.go:31] will retry after 4.361114987s: waiting for machine to come up
	I0812 12:15:08.844313  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.845037  485208 main.go:141] libmachine: (ha-220134-m03) Found IP for machine: 192.168.39.186
	I0812 12:15:08.845075  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has current primary IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.845107  485208 main.go:141] libmachine: (ha-220134-m03) Reserving static IP address...
	I0812 12:15:08.845427  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find host DHCP lease matching {name: "ha-220134-m03", mac: "52:54:00:dc:00:32", ip: "192.168.39.186"} in network mk-ha-220134
	I0812 12:15:08.928064  485208 main.go:141] libmachine: (ha-220134-m03) Reserved static IP address: 192.168.39.186
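
The retries above show the driver polling the libvirt network for a DHCP lease, sleeping a little longer between attempts until the VM reports an address. A minimal Go sketch of that wait-for-IP pattern follows; it is not minikube's actual driver code, and lookupLeaseIP is a hypothetical stand-in for querying the lease table by MAC address.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupLeaseIP is a placeholder; the real driver asks libvirt for the
    // network's DHCP leases and matches them against the domain's MAC.
    func lookupLeaseIP(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP polls until an address appears or the deadline passes,
    // growing the delay between attempts roughly like the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupLeaseIP(mac); err == nil && ip != "" {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2 // back off between polls
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:dc:00:32", 2*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }
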
	I0812 12:15:08.928112  485208 main.go:141] libmachine: (ha-220134-m03) Waiting for SSH to be available...
	I0812 12:15:08.928125  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Getting to WaitForSSH function...
	I0812 12:15:08.931087  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.931624  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:08.931659  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.931857  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using SSH client type: external
	I0812 12:15:08.931886  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa (-rw-------)
	I0812 12:15:08.931919  485208 main.go:141] libmachine: (ha-220134-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:15:08.931934  485208 main.go:141] libmachine: (ha-220134-m03) DBG | About to run SSH command:
	I0812 12:15:08.931947  485208 main.go:141] libmachine: (ha-220134-m03) DBG | exit 0
	I0812 12:15:09.057066  485208 main.go:141] libmachine: (ha-220134-m03) DBG | SSH cmd err, output: <nil>: 
	I0812 12:15:09.057378  485208 main.go:141] libmachine: (ha-220134-m03) KVM machine creation complete!
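
The "Using SSH client type: external" lines above show the first reachability probe shelling out to the system ssh binary with host-key checking disabled and the generated machine key. A rough os/exec sketch of that invocation is below; the argument list is copied from the log, everything else is illustrative rather than minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged argument list: throwaway host keys, the machine's
    	// generated id_rsa, and "exit 0" as the availability probe.
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa",
    		"-p", "22",
    		"docker@192.168.39.186",
    		"exit 0",
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	fmt.Printf("ssh output: %q, err: %v\n", out, err)
    }
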
	I0812 12:15:09.057743  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:15:09.058284  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:09.058473  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:09.058639  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:15:09.058655  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:15:09.060036  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:15:09.060052  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:15:09.060057  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:15:09.060063  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.062560  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.062955  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.062984  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.063145  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.063299  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.063487  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.063662  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.063832  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.064051  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.064063  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:15:09.172538  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:15:09.172569  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:15:09.172578  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.175739  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.176177  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.176205  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.176341  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.176640  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.176853  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.177009  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.177253  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.177425  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.177439  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:15:09.286107  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:15:09.286183  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:15:09.286194  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:15:09.286205  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.286489  485208 buildroot.go:166] provisioning hostname "ha-220134-m03"
	I0812 12:15:09.286529  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.286740  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.289861  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.290324  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.290361  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.290544  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.290733  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.290906  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.291084  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.291256  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.291475  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.291493  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134-m03 && echo "ha-220134-m03" | sudo tee /etc/hostname
	I0812 12:15:09.418898  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134-m03
	
	I0812 12:15:09.418933  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.422111  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.422527  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.422558  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.422768  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.422987  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.423189  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.423343  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.423523  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.423716  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.423733  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:15:09.543765  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
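
The hostname step above sets the kernel hostname and then makes sure /etc/hosts maps 127.0.1.1 to the node name, either by rewriting an existing 127.0.1.1 entry or appending one. The provisioner does this with the shell snippet shown over SSH; the Go sketch below only mirrors the same rewrite logic locally, as an illustration under that assumption.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell above: if no line already ends with the
    // hostname, rewrite an existing 127.0.1.1 entry or append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
    		return hosts // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
    	}
    	if !strings.HasSuffix(hosts, "\n") {
    		hosts += "\n"
    	}
    	return hosts + "127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(ensureHostsEntry(string(data), "ha-220134-m03"))
    }
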
	I0812 12:15:09.543804  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:15:09.543822  485208 buildroot.go:174] setting up certificates
	I0812 12:15:09.543833  485208 provision.go:84] configureAuth start
	I0812 12:15:09.543846  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.544164  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:09.547578  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.548065  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.548097  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.548368  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.550642  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.550993  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.551016  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.551197  485208 provision.go:143] copyHostCerts
	I0812 12:15:09.551247  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:15:09.551302  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:15:09.551311  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:15:09.551379  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:15:09.551464  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:15:09.551481  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:15:09.551488  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:15:09.551514  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:15:09.551562  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:15:09.551578  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:15:09.551585  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:15:09.551605  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:15:09.551664  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134-m03 san=[127.0.0.1 192.168.39.186 ha-220134-m03 localhost minikube]
	I0812 12:15:09.691269  485208 provision.go:177] copyRemoteCerts
	I0812 12:15:09.691330  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:15:09.691356  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.694292  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.694610  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.694644  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.694805  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.695006  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.695179  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.695319  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:09.779238  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:15:09.779324  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:15:09.806470  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:15:09.806562  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:15:09.833996  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:15:09.834076  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 12:15:09.861148  485208 provision.go:87] duration metric: took 317.299651ms to configureAuth
	I0812 12:15:09.861193  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:15:09.861496  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:09.861609  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.864409  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.864927  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.864959  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.865158  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.865374  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.865604  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.865775  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.865984  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.866162  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.866177  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:15:10.141905  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:15:10.141948  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:15:10.141961  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetURL
	I0812 12:15:10.143339  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using libvirt version 6000000
	I0812 12:15:10.145583  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.146035  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.146072  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.146240  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:15:10.146253  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:15:10.146261  485208 client.go:171] duration metric: took 23.993783736s to LocalClient.Create
	I0812 12:15:10.146288  485208 start.go:167] duration metric: took 23.993850825s to libmachine.API.Create "ha-220134"
	I0812 12:15:10.146299  485208 start.go:293] postStartSetup for "ha-220134-m03" (driver="kvm2")
	I0812 12:15:10.146313  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:15:10.146328  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.146603  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:15:10.146623  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.148993  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.149438  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.149468  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.149645  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.149838  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.150034  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.150210  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.236302  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:15:10.240755  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:15:10.240788  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:15:10.240866  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:15:10.240937  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:15:10.240946  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:15:10.241026  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:15:10.251073  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:15:10.275608  485208 start.go:296] duration metric: took 129.289194ms for postStartSetup
	I0812 12:15:10.275664  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:15:10.276276  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:10.278912  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.279215  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.279241  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.279538  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:15:10.279739  485208 start.go:128] duration metric: took 24.147300324s to createHost
	I0812 12:15:10.279767  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.282242  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.282621  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.282650  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.282773  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.282972  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.283203  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.283338  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.283491  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:10.283666  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:10.283677  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:15:10.394572  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464910.368454015
	
	I0812 12:15:10.394602  485208 fix.go:216] guest clock: 1723464910.368454015
	I0812 12:15:10.394612  485208 fix.go:229] Guest: 2024-08-12 12:15:10.368454015 +0000 UTC Remote: 2024-08-12 12:15:10.27975226 +0000 UTC m=+217.130327126 (delta=88.701755ms)
	I0812 12:15:10.394636  485208 fix.go:200] guest clock delta is within tolerance: 88.701755ms
	I0812 12:15:10.394644  485208 start.go:83] releasing machines lock for "ha-220134-m03", held for 24.262422311s
	I0812 12:15:10.394667  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.394980  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:10.398332  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.398786  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.398815  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.400828  485208 out.go:177] * Found network options:
	I0812 12:15:10.402285  485208 out.go:177]   - NO_PROXY=192.168.39.228,192.168.39.215
	W0812 12:15:10.403549  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 12:15:10.403572  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:15:10.403589  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404254  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404527  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404655  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:15:10.404698  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	W0812 12:15:10.404774  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 12:15:10.404807  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:15:10.404884  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:15:10.404908  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.407557  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.407768  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408059  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.408082  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408402  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.408427  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408436  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.408663  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.408729  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.408857  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.408887  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.409066  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.409074  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.409239  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.649138  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:15:10.656231  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:15:10.656313  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:15:10.673736  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:15:10.673761  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:15:10.673825  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:15:10.691199  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:15:10.706610  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:15:10.706682  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:15:10.721355  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:15:10.737340  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:15:10.867875  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:15:11.034902  485208 docker.go:233] disabling docker service ...
	I0812 12:15:11.034999  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:15:11.058103  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:15:11.074000  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:15:11.216608  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:15:11.342608  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:15:11.359897  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:15:11.380642  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:15:11.380708  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.391300  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:15:11.391378  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.403641  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.415329  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.426601  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:15:11.437779  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.449221  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.467114  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
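
The sequence of sed runs above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup, and seed default_sysctls. Below is a minimal in-memory Go sketch of the same whole-line substitutions for the first three edits; the sample drop-in content is assumed, and the sysctl handling is omitted for brevity.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // applyCrioOverrides performs the same kind of line rewrites the sed
    // commands above apply to the CRI-O drop-in config.
    func applyCrioOverrides(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	sample := `[crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    pause_image = "registry.k8s.io/pause:3.8"
    ` // assumed sample content, not the VM's real file
    	fmt.Print(applyCrioOverrides(sample))
    }
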
	I0812 12:15:11.478693  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:15:11.488264  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:15:11.488342  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:15:11.502327  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:15:11.513785  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:11.641677  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:15:11.791705  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:15:11.791792  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:15:11.796976  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:15:11.797059  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:15:11.801905  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:15:11.849014  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:15:11.849135  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:15:11.881023  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:15:11.915071  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:15:11.916784  485208 out.go:177]   - env NO_PROXY=192.168.39.228
	I0812 12:15:11.918466  485208 out.go:177]   - env NO_PROXY=192.168.39.228,192.168.39.215
	I0812 12:15:11.919870  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:11.922787  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:11.923224  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:11.923256  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:11.923524  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:15:11.928325  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:15:11.942473  485208 mustload.go:65] Loading cluster: ha-220134
	I0812 12:15:11.942789  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:11.943051  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:11.943092  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:11.959670  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0812 12:15:11.960163  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:11.960708  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:11.960735  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:11.961123  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:11.961415  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:15:11.963550  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:15:11.963855  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:11.963895  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:11.979646  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0812 12:15:11.980156  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:11.980701  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:11.980731  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:11.981028  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:11.981258  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:15:11.981458  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.186
	I0812 12:15:11.981470  485208 certs.go:194] generating shared ca certs ...
	I0812 12:15:11.981495  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:11.981642  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:15:11.981731  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:15:11.981746  485208 certs.go:256] generating profile certs ...
	I0812 12:15:11.981855  485208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:15:11.981894  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a
	I0812 12:15:11.981912  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.186 192.168.39.254]
	I0812 12:15:12.248323  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a ...
	I0812 12:15:12.248383  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a: {Name:mkb3073f2fe8aabdbf88fa505342e41968793922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:12.248639  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a ...
	I0812 12:15:12.248663  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a: {Name:mkd338db6afdce959177496d1622a16e570568c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:12.248814  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:15:12.248993  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
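
The step above regenerates the shared apiserver serving certificate so its SANs cover every address this control plane can be reached on: the service IP, localhost, all three node IPs, and the 192.168.39.254 VIP. A self-contained Go sketch of issuing such a SAN cert with crypto/x509 follows; the IP list is taken from the log, while the key sizes, lifetimes, and subject names are illustrative assumptions rather than minikube's exact values.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// CA and server keys (sizes chosen for brevity, not taken from the log).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)

    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert whose IP SANs match the list logged above.
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.228"), net.ParseIP("192.168.39.215"),
    			net.ParseIP("192.168.39.186"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Println("server cert DER bytes:", len(srvDER), "err:", err)
    }
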
	I0812 12:15:12.249224  485208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:15:12.249252  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:15:12.249276  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:15:12.249295  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:15:12.249310  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:15:12.249326  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:15:12.249341  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:15:12.249356  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:15:12.249371  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:15:12.249439  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:15:12.249476  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:15:12.249487  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:15:12.249514  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:15:12.249539  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:15:12.249564  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:15:12.249607  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:15:12.249636  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.249654  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.249669  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.249712  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:15:12.252718  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:12.253161  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:15:12.253195  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:12.253310  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:15:12.253538  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:15:12.253733  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:15:12.253905  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:15:12.325509  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0812 12:15:12.331478  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 12:15:12.346358  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0812 12:15:12.351843  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0812 12:15:12.365038  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 12:15:12.370394  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 12:15:12.382010  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0812 12:15:12.387090  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 12:15:12.410634  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0812 12:15:12.415394  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 12:15:12.427945  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0812 12:15:12.434870  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 12:15:12.448851  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:15:12.475774  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:15:12.501005  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:15:12.527382  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:15:12.553839  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0812 12:15:12.580109  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:15:12.607717  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:15:12.637578  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:15:12.665723  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:15:12.692019  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:15:12.718643  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:15:12.744580  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 12:15:12.763993  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0812 12:15:12.782714  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 12:15:12.801248  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 12:15:12.820166  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 12:15:12.840028  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 12:15:12.859427  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 12:15:12.880244  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:15:12.886878  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:15:12.899584  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.904818  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.904897  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.911261  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:15:12.926317  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:15:12.938933  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.943838  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.943920  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.951418  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:15:12.963787  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:15:12.975929  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.980630  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.980709  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.986747  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:15:12.999324  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:15:13.003797  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:15:13.003869  485208 kubeadm.go:934] updating node {m03 192.168.39.186 8443 v1.30.3 crio true true} ...
	I0812 12:15:13.003968  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:15:13.003997  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:15:13.004040  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:15:13.023415  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:15:13.023501  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 12:15:13.023589  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:15:13.035641  485208 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 12:15:13.035737  485208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 12:15:13.046627  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 12:15:13.046655  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:15:13.046671  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0812 12:15:13.046729  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:15:13.046779  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0812 12:15:13.046732  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:15:13.046814  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:15:13.046962  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:15:13.068415  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:15:13.068480  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 12:15:13.068516  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 12:15:13.068547  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:15:13.068548  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 12:15:13.068576  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 12:15:13.107318  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 12:15:13.107372  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0812 12:15:14.114893  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 12:15:14.124892  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 12:15:14.142699  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:15:14.161029  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:15:14.178890  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:15:14.183190  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:15:14.196092  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:14.328714  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:15:14.355760  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:15:14.356204  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:14.356262  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:14.375497  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0812 12:15:14.375963  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:14.376483  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:14.376509  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:14.376927  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:14.377194  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:15:14.377395  485208 start.go:317] joinCluster: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:15:14.377584  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 12:15:14.377630  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:15:14.380463  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:14.380977  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:15:14.381012  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:14.381206  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:15:14.381389  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:15:14.381565  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:15:14.381745  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:15:14.542735  485208 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:15:14.542788  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mqn6sp.73kz8b8xaiyk1wfd --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I0812 12:15:38.133587  485208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mqn6sp.73kz8b8xaiyk1wfd --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (23.590763313s)
	I0812 12:15:38.133628  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 12:15:38.739268  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134-m03 minikube.k8s.io/updated_at=2024_08_12T12_15_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=false
	I0812 12:15:38.870523  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220134-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 12:15:38.983247  485208 start.go:319] duration metric: took 24.605848322s to joinCluster
	I0812 12:15:38.983347  485208 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:15:38.983739  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:38.984770  485208 out.go:177] * Verifying Kubernetes components...
	I0812 12:15:38.986098  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:39.253258  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:15:39.316089  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:15:39.316442  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 12:15:39.316559  485208 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.228:8443
	I0812 12:15:39.316857  485208 node_ready.go:35] waiting up to 6m0s for node "ha-220134-m03" to be "Ready" ...
	I0812 12:15:39.316960  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:39.316974  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:39.316986  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:39.316995  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:39.342249  485208 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0812 12:15:39.817647  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:39.817675  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:39.817689  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:39.817692  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:39.821204  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:40.317902  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:40.317935  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:40.317947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:40.317953  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:40.338792  485208 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0812 12:15:40.817809  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:40.817837  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:40.817850  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:40.817855  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:40.824992  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:41.317426  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:41.317450  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:41.317459  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:41.317463  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:41.320763  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:41.321604  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:41.817471  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:41.817500  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:41.817512  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:41.817516  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:41.821210  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:42.317913  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:42.317936  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:42.317943  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:42.317947  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:42.324448  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:15:42.817303  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:42.817330  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:42.817340  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:42.817345  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:42.821361  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:43.317869  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:43.317901  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:43.317912  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:43.317920  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:43.321819  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:43.322582  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:43.817281  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:43.817305  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:43.817313  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:43.817317  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:43.821128  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:44.317960  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:44.317993  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:44.318005  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:44.318010  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:44.321622  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:44.817248  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:44.817274  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:44.817285  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:44.817291  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:44.821427  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:45.317125  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:45.317154  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:45.317164  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:45.317172  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:45.334629  485208 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0812 12:15:45.335223  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:45.817282  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:45.817310  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:45.817321  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:45.817326  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:45.821430  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:46.317976  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:46.318010  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:46.318024  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:46.318029  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:46.321889  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:46.817894  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:46.817923  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:46.817940  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:46.817946  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:46.821782  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.317153  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:47.317179  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:47.317191  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:47.317196  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:47.320769  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.817771  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:47.817797  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:47.817805  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:47.817809  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:47.821828  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.822695  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:48.318009  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:48.318035  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:48.318045  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:48.318057  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:48.321642  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:48.817283  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:48.817307  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:48.817318  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:48.817323  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:48.821163  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:49.317247  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:49.317273  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:49.317282  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:49.317287  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:49.320924  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:49.817610  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:49.817636  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:49.817646  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:49.817652  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:49.821736  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:49.822866  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:50.317833  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:50.317866  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:50.317878  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:50.317951  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:50.322169  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:50.817186  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:50.817212  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:50.817221  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:50.817225  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:50.821455  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:51.317855  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:51.317884  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:51.317894  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:51.317900  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:51.321625  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:51.818108  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:51.818148  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:51.818163  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:51.818171  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:51.822034  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:52.317152  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:52.317178  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:52.317187  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:52.317192  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:52.321179  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:52.321831  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:52.817189  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:52.817214  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:52.817223  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:52.817226  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:52.821188  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:53.317803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:53.317824  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:53.317833  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:53.317836  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:53.322305  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:53.818064  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:53.818088  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:53.818097  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:53.818101  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:53.825556  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:54.317918  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:54.317949  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:54.317963  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:54.317968  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:54.321283  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:54.322098  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:54.817992  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:54.818017  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:54.818025  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:54.818030  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:54.822041  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:55.317942  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:55.317965  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:55.317974  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:55.317979  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:55.321883  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:55.817294  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:55.817321  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:55.817332  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:55.817339  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:55.821366  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:56.318102  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:56.318126  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:56.318135  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:56.318139  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:56.321908  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:56.322456  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:56.817371  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:56.817395  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:56.817404  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:56.817408  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:56.821250  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.317301  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:57.317332  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.317341  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.317345  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.320868  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.817803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:57.817830  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.817842  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.817848  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.828061  485208 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0812 12:15:57.828760  485208 node_ready.go:49] node "ha-220134-m03" has status "Ready":"True"
	I0812 12:15:57.828796  485208 node_ready.go:38] duration metric: took 18.511915198s for node "ha-220134-m03" to be "Ready" ...
	I0812 12:15:57.828809  485208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:15:57.828904  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:15:57.828912  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.828922  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.828931  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.835389  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:15:57.844344  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.844483  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mtqtk
	I0812 12:15:57.844496  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.844507  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.844521  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.848032  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.848780  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.848802  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.848813  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.848820  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.856504  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:57.857209  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.857235  485208 pod_ready.go:81] duration metric: took 12.849573ms for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.857247  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.857333  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t8pg7
	I0812 12:15:57.857344  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.857354  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.857363  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.860657  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.861454  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.861474  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.861485  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.861490  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.864480  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.865233  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.865259  485208 pod_ready.go:81] duration metric: took 8.001039ms for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.865273  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.865347  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134
	I0812 12:15:57.865359  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.865369  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.865373  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.869318  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.870039  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.870059  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.870070  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.870077  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.872913  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.874165  485208 pod_ready.go:92] pod "etcd-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.874184  485208 pod_ready.go:81] duration metric: took 8.905178ms for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.874193  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.874248  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m02
	I0812 12:15:57.874255  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.874262  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.874270  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.877018  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.877677  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:57.877697  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.877708  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.877713  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.880246  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.880801  485208 pod_ready.go:92] pod "etcd-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.880823  485208 pod_ready.go:81] duration metric: took 6.623619ms for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.880832  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.018082  485208 request.go:629] Waited for 137.1761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m03
	I0812 12:15:58.018175  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m03
	I0812 12:15:58.018183  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.018191  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.018195  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.021679  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.218453  485208 request.go:629] Waited for 196.153729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:58.218517  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:58.218523  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.218534  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.218538  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.222280  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.222815  485208 pod_ready.go:92] pod "etcd-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:58.222838  485208 pod_ready.go:81] duration metric: took 341.999438ms for pod "etcd-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.222863  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.418603  485208 request.go:629] Waited for 195.632879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:15:58.418696  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:15:58.418706  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.418718  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.418727  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.422992  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:58.618130  485208 request.go:629] Waited for 194.402829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:58.618210  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:58.618218  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.618233  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.618251  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.622051  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.622714  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:58.622749  485208 pod_ready.go:81] duration metric: took 399.874745ms for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.622763  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.818786  485208 request.go:629] Waited for 195.861954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:15:58.818855  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:15:58.818864  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.818879  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.818888  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.822493  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.018527  485208 request.go:629] Waited for 195.364582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:59.018607  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:59.018612  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.018620  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.018624  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.022380  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.022926  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.022948  485208 pod_ready.go:81] duration metric: took 400.173977ms for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.022959  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.218119  485208 request.go:629] Waited for 195.069484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m03
	I0812 12:15:59.218229  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m03
	I0812 12:15:59.218249  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.218258  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.218262  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.222067  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.418056  485208 request.go:629] Waited for 195.153683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:59.418123  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:59.418128  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.418136  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.418142  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.421814  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.422444  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.422462  485208 pod_ready.go:81] duration metric: took 399.4962ms for pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.422473  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.618585  485208 request.go:629] Waited for 196.031623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:15:59.618684  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:15:59.618691  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.618703  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.618710  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.622949  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:59.818114  485208 request.go:629] Waited for 194.409087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:59.818181  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:59.818192  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.818201  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.818204  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.821893  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.822434  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.822459  485208 pod_ready.go:81] duration metric: took 399.976836ms for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.822479  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.017940  485208 request.go:629] Waited for 195.346209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:16:00.018029  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:16:00.018038  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.018046  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.018053  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.022105  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:00.218183  485208 request.go:629] Waited for 195.418276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:00.218257  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:00.218263  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.218270  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.218274  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.225132  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:16:00.225623  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:00.225646  485208 pod_ready.go:81] duration metric: took 403.159407ms for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.225657  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.418740  485208 request.go:629] Waited for 193.005776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m03
	I0812 12:16:00.418835  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m03
	I0812 12:16:00.418843  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.418854  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.418862  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.424349  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:16:00.618588  485208 request.go:629] Waited for 193.405723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:00.618677  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:00.618685  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.618696  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.618702  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.622672  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:00.623302  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:00.623337  485208 pod_ready.go:81] duration metric: took 397.673607ms for pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.623348  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.818404  485208 request.go:629] Waited for 194.974777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:16:00.818491  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:16:00.818497  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.818505  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.818511  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.822075  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.018322  485208 request.go:629] Waited for 195.38453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:01.018456  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:01.018468  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.018478  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.018486  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.023391  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:01.024001  485208 pod_ready.go:92] pod "kube-proxy-bs72f" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.024028  485208 pod_ready.go:81] duration metric: took 400.674392ms for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.024039  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-frf96" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.218585  485208 request.go:629] Waited for 194.46965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frf96
	I0812 12:16:01.218658  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frf96
	I0812 12:16:01.218664  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.218674  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.218682  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.222376  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.418194  485208 request.go:629] Waited for 193.424592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:01.418267  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:01.418272  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.418281  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.418285  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.422466  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:01.423030  485208 pod_ready.go:92] pod "kube-proxy-frf96" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.423056  485208 pod_ready.go:81] duration metric: took 399.011331ms for pod "kube-proxy-frf96" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.423074  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.618143  485208 request.go:629] Waited for 194.985445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:16:01.618222  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:16:01.618228  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.618239  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.618243  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.622216  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.818656  485208 request.go:629] Waited for 195.548171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:01.818725  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:01.818731  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.818738  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.818741  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.822308  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.823189  485208 pod_ready.go:92] pod "kube-proxy-zcgh8" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.823218  485208 pod_ready.go:81] duration metric: took 400.132968ms for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.823234  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.018381  485208 request.go:629] Waited for 195.050301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:16:02.018474  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:16:02.018482  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.018503  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.018527  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.022296  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:02.218293  485208 request.go:629] Waited for 195.40703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:02.218390  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:02.218395  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.218406  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.218419  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.222345  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:02.223079  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:02.223106  485208 pod_ready.go:81] duration metric: took 399.864213ms for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.223120  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.417959  485208 request.go:629] Waited for 194.725233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:16:02.418040  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:16:02.418047  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.418058  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.418067  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.422438  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:02.618516  485208 request.go:629] Waited for 195.439125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:02.618611  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:02.618619  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.618629  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.618636  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.622890  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:02.623630  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:02.623657  485208 pod_ready.go:81] duration metric: took 400.529786ms for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.623667  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.818619  485208 request.go:629] Waited for 194.850023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m03
	I0812 12:16:02.818691  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m03
	I0812 12:16:02.818697  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.818707  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.818721  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.822164  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.018730  485208 request.go:629] Waited for 195.397233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:03.018813  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:03.018822  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.018835  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.018852  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.022262  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.022736  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:03.022755  485208 pod_ready.go:81] duration metric: took 399.081346ms for pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:03.022766  485208 pod_ready.go:38] duration metric: took 5.193943384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:16:03.022782  485208 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:16:03.022838  485208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:16:03.039206  485208 api_server.go:72] duration metric: took 24.055813006s to wait for apiserver process to appear ...
	I0812 12:16:03.039235  485208 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:16:03.039255  485208 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0812 12:16:03.044029  485208 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0812 12:16:03.044126  485208 round_trippers.go:463] GET https://192.168.39.228:8443/version
	I0812 12:16:03.044138  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.044149  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.044158  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.045192  485208 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0812 12:16:03.045275  485208 api_server.go:141] control plane version: v1.30.3
	I0812 12:16:03.045294  485208 api_server.go:131] duration metric: took 6.052725ms to wait for apiserver health ...
	I0812 12:16:03.045304  485208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:16:03.217860  485208 request.go:629] Waited for 172.441694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.217949  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.217960  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.217983  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.218012  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.226864  485208 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0812 12:16:03.234278  485208 system_pods.go:59] 24 kube-system pods found
	I0812 12:16:03.234312  485208 system_pods.go:61] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:16:03.234319  485208 system_pods.go:61] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:16:03.234324  485208 system_pods.go:61] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:16:03.234330  485208 system_pods.go:61] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:16:03.234334  485208 system_pods.go:61] "etcd-ha-220134-m03" [7e4b8706-73e3-42d0-a278-af5746ec8b1c] Running
	I0812 12:16:03.234338  485208 system_pods.go:61] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:16:03.234343  485208 system_pods.go:61] "kindnet-5rpgt" [31982666-9f03-4c8c-9af1-49b88de06452] Running
	I0812 12:16:03.234348  485208 system_pods.go:61] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:16:03.234352  485208 system_pods.go:61] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:16:03.234358  485208 system_pods.go:61] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:16:03.234362  485208 system_pods.go:61] "kube-apiserver-ha-220134-m03" [803dd422-e106-4e57-b70b-cef6cfb2f085] Running
	I0812 12:16:03.234367  485208 system_pods.go:61] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:16:03.234376  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:16:03.234382  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m03" [20cc5801-d513-46d3-84c1-635ef86e0cc6] Running
	I0812 12:16:03.234390  485208 system_pods.go:61] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:16:03.234396  485208 system_pods.go:61] "kube-proxy-frf96" [e7a33b21-d4a2-4099-8b0c-e602993fd716] Running
	I0812 12:16:03.234402  485208 system_pods.go:61] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:16:03.234408  485208 system_pods.go:61] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:16:03.234413  485208 system_pods.go:61] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:16:03.234421  485208 system_pods.go:61] "kube-scheduler-ha-220134-m03" [eb11cfca-d302-4c98-8d7c-ba0689b8f812] Running
	I0812 12:16:03.234427  485208 system_pods.go:61] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:16:03.234433  485208 system_pods.go:61] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:16:03.234439  485208 system_pods.go:61] "kube-vip-ha-220134-m03" [d4064203-c571-43ac-a0f4-8cb1082d3e05] Running
	I0812 12:16:03.234448  485208 system_pods.go:61] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:16:03.234458  485208 system_pods.go:74] duration metric: took 189.142008ms to wait for pod list to return data ...
	I0812 12:16:03.234471  485208 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:16:03.418556  485208 request.go:629] Waited for 183.983595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:16:03.418632  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:16:03.418637  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.418645  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.418651  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.423472  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:03.423614  485208 default_sa.go:45] found service account: "default"
	I0812 12:16:03.423636  485208 default_sa.go:55] duration metric: took 189.156291ms for default service account to be created ...
	I0812 12:16:03.423648  485208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:16:03.618148  485208 request.go:629] Waited for 194.414281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.618238  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.618243  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.618251  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.618256  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.627772  485208 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 12:16:03.635180  485208 system_pods.go:86] 24 kube-system pods found
	I0812 12:16:03.635218  485208 system_pods.go:89] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:16:03.635225  485208 system_pods.go:89] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:16:03.635229  485208 system_pods.go:89] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:16:03.635233  485208 system_pods.go:89] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:16:03.635237  485208 system_pods.go:89] "etcd-ha-220134-m03" [7e4b8706-73e3-42d0-a278-af5746ec8b1c] Running
	I0812 12:16:03.635241  485208 system_pods.go:89] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:16:03.635244  485208 system_pods.go:89] "kindnet-5rpgt" [31982666-9f03-4c8c-9af1-49b88de06452] Running
	I0812 12:16:03.635248  485208 system_pods.go:89] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:16:03.635252  485208 system_pods.go:89] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:16:03.635256  485208 system_pods.go:89] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:16:03.635260  485208 system_pods.go:89] "kube-apiserver-ha-220134-m03" [803dd422-e106-4e57-b70b-cef6cfb2f085] Running
	I0812 12:16:03.635263  485208 system_pods.go:89] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:16:03.635268  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:16:03.635272  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m03" [20cc5801-d513-46d3-84c1-635ef86e0cc6] Running
	I0812 12:16:03.635276  485208 system_pods.go:89] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:16:03.635279  485208 system_pods.go:89] "kube-proxy-frf96" [e7a33b21-d4a2-4099-8b0c-e602993fd716] Running
	I0812 12:16:03.635283  485208 system_pods.go:89] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:16:03.635286  485208 system_pods.go:89] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:16:03.635290  485208 system_pods.go:89] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:16:03.635293  485208 system_pods.go:89] "kube-scheduler-ha-220134-m03" [eb11cfca-d302-4c98-8d7c-ba0689b8f812] Running
	I0812 12:16:03.635296  485208 system_pods.go:89] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:16:03.635300  485208 system_pods.go:89] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:16:03.635303  485208 system_pods.go:89] "kube-vip-ha-220134-m03" [d4064203-c571-43ac-a0f4-8cb1082d3e05] Running
	I0812 12:16:03.635306  485208 system_pods.go:89] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:16:03.635314  485208 system_pods.go:126] duration metric: took 211.659957ms to wait for k8s-apps to be running ...
	I0812 12:16:03.635325  485208 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:16:03.635375  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:16:03.652340  485208 system_svc.go:56] duration metric: took 17.002405ms WaitForService to wait for kubelet
	I0812 12:16:03.652383  485208 kubeadm.go:582] duration metric: took 24.668994669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:16:03.652411  485208 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:16:03.817784  485208 request.go:629] Waited for 165.280343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes
	I0812 12:16:03.817924  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes
	I0812 12:16:03.817938  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.817947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.817951  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.821918  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.823150  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823177  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823191  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823196  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823201  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823206  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823211  485208 node_conditions.go:105] duration metric: took 170.794009ms to run NodePressure ...
	I0812 12:16:03.823232  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:16:03.823263  485208 start.go:255] writing updated cluster config ...
	I0812 12:16:03.823610  485208 ssh_runner.go:195] Run: rm -f paused
	I0812 12:16:03.881159  485208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 12:16:03.884497  485208 out.go:177] * Done! kubectl is now configured to use "ha-220134" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.119088835Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-qh8vv,Uid:31a40d8d-51b3-476c-a261-e4958fa5001a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464965142567598,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T12:16:04.820168212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bca65bc5-3ba1-44be-8606-f8235cf9b3d0,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1723464762762259965,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T12:12:42.438460462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mtqtk,Uid:be769ca5-c3cd-4682-96f3-6244b5e1cadb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464762746806996,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T12:12:42.437750006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-t8pg7,Uid:219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1723464762744833843,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T12:12:42.430820831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&PodSandboxMetadata{Name:kube-proxy-zcgh8,Uid:a39c5f53-1764-43b6-a140-2fec3819210d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464746611853016,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-12T12:12:24.802512324Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&PodSandboxMetadata{Name:kindnet-mh4sv,Uid:cd619441-cf92-4026-98ef-0f50d4bfc470,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464745714120535,Labels:map[string]string{app: kindnet,controller-revision-hash: 7c6d997646,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T12:12:24.792401093Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-220134,Uid:d348dbaa84a96f978a599972e582878c,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464725884893540,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d348dbaa84a96f978a599972e582878c,kubernetes.io/config.seen: 2024-08-12T12:12:05.408085035Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-220134,Uid:e0925dae6628595ef369e55476b766bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464725881409652,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766b
f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.228:8443,kubernetes.io/config.hash: e0925dae6628595ef369e55476b766bf,kubernetes.io/config.seen: 2024-08-12T12:12:05.408083990Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&PodSandboxMetadata{Name:etcd-ha-220134,Uid:d48521a4f0ff7e835626ad8a41bcd761,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464725873818862,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.228:2379,kubernetes.io/config.hash: d48521a4f0ff7e835626ad8a41bcd761,kubernetes.io/config.seen: 2024-08-12T12:12:05.408082911Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-220134,Uid:a5a2fb7f75425c6aec875451722b8037,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464725856210850,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{kubernetes.io/config.hash: a5a2fb7f75425c6aec875451722b8037,kubernetes.io/config.seen: 2024-08-12T12:12:05.408081620Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-220134,Uid:8440dcd3de63dd3f0b314aca28c58e50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723464725851502692,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8440dcd3de63dd3f0b314aca28c58e50,kubernetes.io/config.seen: 2024-08-12T12:12:05.408027489Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=05749cf6-10cf-4523-80ef-d1d4b8ff72ab name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.119874863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4be2249-71c0-44bb-841d-aaa2739cf9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.119942054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4be2249-71c0-44bb-841d-aaa2739cf9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.120150438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4be2249-71c0-44bb-841d-aaa2739cf9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.159203722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43ce5569-cb63-46d0-b0c7-33935ddb8d04 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.159367530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43ce5569-cb63-46d0-b0c7-33935ddb8d04 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.161056025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f95637f-4cd6-47e2-a944-a0a1ea557223 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.161650145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465186161627821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f95637f-4cd6-47e2-a944-a0a1ea557223 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.162267232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d33a1b30-c415-4edf-b641-b9f8d25dd277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.162378052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d33a1b30-c415-4edf-b641-b9f8d25dd277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.162594747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d33a1b30-c415-4edf-b641-b9f8d25dd277 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.203219110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be51aee2-a2c9-4956-ad9b-629bcd49bdbf name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.203367409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be51aee2-a2c9-4956-ad9b-629bcd49bdbf name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.205006379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8d6865d-0aff-47e8-9ca1-13b0a2310926 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.205506648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465186205479735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8d6865d-0aff-47e8-9ca1-13b0a2310926 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.205942549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1729f29-2426-457a-a388-cdf82464e14a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.206021532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1729f29-2426-457a-a388-cdf82464e14a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.206255678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1729f29-2426-457a-a388-cdf82464e14a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.245335786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b94be116-41ee-469d-a7d8-abc4e2a44074 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.245409825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b94be116-41ee-469d-a7d8-abc4e2a44074 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.246648435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07a5f9e1-d85e-4261-8706-3529ad80e875 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.247476772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465186247446654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07a5f9e1-d85e-4261-8706-3529ad80e875 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.247938754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63e4e373-18ac-40dd-8893-b6e87dc448ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.247991097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63e4e373-18ac-40dd-8893-b6e87dc448ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:19:46 ha-220134 crio[680]: time="2024-08-12 12:19:46.248393202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63e4e373-18ac-40dd-8893-b6e87dc448ab name=/runtime.v1.RuntimeService/ListContainers
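	The crio entries above are CRI calls made against the runtime (Version, ImageFsInfo, ListContainers) while the logs were being collected. For reference, the same endpoints can be queried by hand on the node; a minimal sketch, assuming crictl is available inside the minikube VM and CRI-O is listening on its default socket:
	
	  # assumed socket path; matches the cri-socket annotation in the node description below
	  export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
	  crictl version      # RuntimeName/RuntimeVersion, as in the VersionResponse above
	  crictl imagefsinfo  # image filesystem usage, as in the ImageFsInfoResponse above
	  crictl ps -a        # full container list, as in the ListContainersResponse above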
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd5e5f2f3e8c9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d0ae8920356aa       busybox-fc5497c4f-qh8vv
	58c1b0454a4f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   2c5c191b44764       coredns-7db6d8ff4d-t8pg7
	d772d606436a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   3a4517d1fb24c       storage-provisioner
	d6bc464a808be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   c1f343a193477       coredns-7db6d8ff4d-mtqtk
	ec1c98b0147f2       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    7 minutes ago       Running             kindnet-cni               0                   6bb5cf25bace5       kindnet-mh4sv
	43dd48710573d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   d3f2e966dc4ec       kube-proxy-zcgh8
	4c2431108a96b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   38b5e173b2a5b       kube-vip-ha-220134
	61f57a70138eb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   dfe26ae1cd457       kube-apiserver-ha-220134
	3b386f478bcd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   e773728876a09       etcd-ha-220134
	d80fece0b2b4c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   142675cc5defd       kube-controller-manager-ha-220134
	e302617a6e799       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   36c1552f9acff       kube-scheduler-ha-220134
	
	
	==> coredns [58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf] <==
	[INFO] 10.244.2.2:51341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00065331s
	[INFO] 10.244.2.2:60084 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012982474s
	[INFO] 10.244.1.2:47114 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001833451s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000675959s
	[INFO] 10.244.0.4:53598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121522s
	[INFO] 10.244.0.4:43198 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083609s
	[INFO] 10.244.2.2:44558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149393s
	[INFO] 10.244.2.2:54267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000289357s
	[INFO] 10.244.2.2:36401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192313s
	[INFO] 10.244.2.2:47805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012737375s
	[INFO] 10.244.2.2:52660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213917s
	[INFO] 10.244.2.2:56721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019118s
	[INFO] 10.244.1.2:46713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180271s
	[INFO] 10.244.1.2:45630 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117989s
	[INFO] 10.244.1.2:36911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001707s
	[INFO] 10.244.2.2:55073 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.2.2:37969 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010618s
	[INFO] 10.244.1.2:57685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225366s
	[INFO] 10.244.1.2:52755 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103176s
	[INFO] 10.244.0.4:52936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131913s
	[INFO] 10.244.0.4:57415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055098s
	[INFO] 10.244.2.2:48523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363461s
	[INFO] 10.244.1.2:41861 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150101s
	[INFO] 10.244.0.4:60137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147895s
	[INFO] 10.244.0.4:46681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070169s
	
	
	==> coredns [d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691] <==
	[INFO] 10.244.1.2:59335 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001407399s
	[INFO] 10.244.1.2:36634 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210279s
	[INFO] 10.244.1.2:55843 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126918s
	[INFO] 10.244.0.4:55735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122773s
	[INFO] 10.244.0.4:45449 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001732135s
	[INFO] 10.244.0.4:52443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019087s
	[INFO] 10.244.0.4:57191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001115s
	[INFO] 10.244.0.4:36774 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249129s
	[INFO] 10.244.0.4:36176 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018293s
	[INFO] 10.244.0.4:52138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073249s
	[INFO] 10.244.0.4:52765 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054999s
	[INFO] 10.244.2.2:35368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110859s
	[INFO] 10.244.2.2:55727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119256s
	[INFO] 10.244.1.2:45598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120462s
	[INFO] 10.244.1.2:57257 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000297797s
	[INFO] 10.244.0.4:48236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152091s
	[INFO] 10.244.0.4:40466 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098727s
	[INFO] 10.244.2.2:37067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001712s
	[INFO] 10.244.2.2:54242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014178s
	[INFO] 10.244.2.2:41816 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019482s
	[INFO] 10.244.1.2:42291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335455s
	[INFO] 10.244.1.2:33492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078001s
	[INFO] 10.244.1.2:52208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005886s
	[INFO] 10.244.0.4:55618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005463s
	[INFO] 10.244.0.4:59573 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079101s
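	Both coredns replicas log the in-cluster lookups exercised during the test (A/AAAA/PTR queries against kubernetes.default.svc.cluster.local and host.minikube.internal). Queries of this shape can be reproduced from any pod; a minimal sketch, assuming the busybox pod from the container list above is still running:
	
	  # hypothetical manual check; the pod name is taken from the container status table above
	  kubectl exec busybox-fc5497c4f-qh8vv -- nslookup kubernetes.default.svc.cluster.local
	  kubectl exec busybox-fc5497c4f-qh8vv -- nslookup host.minikube.internal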
	
	
	==> describe nodes <==
	Name:               ha-220134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:12:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:19:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-220134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b36c448dca9a4512802dabd6b631307b
	  System UUID:                b36c448d-ca9a-4512-802d-abd6b631307b
	  Boot ID:                    b1858840-6bc1-4ad6-872f-13825f26f2e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qh8vv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7db6d8ff4d-mtqtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 coredns-7db6d8ff4d-t8pg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-ha-220134                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-mh4sv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-apiserver-ha-220134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-ha-220134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-proxy-zcgh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-ha-220134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-vip-ha-220134                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    100m (5%)
	  memory             290Mi (13%)   390Mi (18%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m19s  kube-proxy       
	  Normal  Starting                 7m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s  kubelet          Node ha-220134 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s  kubelet          Node ha-220134 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s  kubelet          Node ha-220134 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m22s  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal  NodeReady                7m4s   kubelet          Node ha-220134 status is now: NodeReady
	  Normal  RegisteredNode           5m10s  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal  RegisteredNode           3m53s  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	
	
	Name:               ha-220134-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:14:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:17:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-220134-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ab5f23e5e3d4308ad21378e16e05f36
	  System UUID:                5ab5f23e-5e3d-4308-ad21-378e16e05f36
	  Boot ID:                    8780b076-0f04-484a-8659-00b31b1b3882
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9hhl4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-220134-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-52flt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-220134-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-220134-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-bs72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-220134-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-220134-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-220134-m02 status is now: NodeNotReady
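The NodeNotReady transition recorded above can also be observed live against the same context while a secondary node is stopped and restarted. A minimal spot-check (not part of the captured run; node name taken from the output above):

    kubectl --context ha-220134 get node ha-220134-m02 -w
    kubectl --context ha-220134 describe node ha-220134-m02 | grep -A 6 Conditions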
	
	
	Name:               ha-220134-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_15_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:19:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-220134-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ec5658a50d452880d7dcb7c738e134
	  System UUID:                d4ec5658-a50d-4528-80d7-dcb7c738e134
	  Boot ID:                    0c28ba62-fd1f-4822-8fc9-5eb9067b87cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-82gr9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-220134-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-5rpgt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-220134-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-220134-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-frf96                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-220134-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-220134-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-220134-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	
	
	Name:               ha-220134-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:16:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:19:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-220134-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 faa5c8215a114c109397b8051f5bfb12
	  System UUID:                faa5c821-5a11-4c10-9397-b8051f5bfb12
	  Boot ID:                    c4c180b8-7edc-46ca-84c9-9555186bc2c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zcp4c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-s6pvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-220134-m04 status is now: NodeReady
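Each node description above carries its assigned pod CIDR (10.244.1.0/24 through 10.244.3.0/24 for the secondary nodes). A hedged one-liner to list the same assignments without paging through kubectl describe:

    kubectl --context ha-220134 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'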
	
	
	==> dmesg <==
	[Aug12 12:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051001] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.613356] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.630122] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.287271] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057678] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.199862] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.121638] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281974] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.332937] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060566] kauditd_printk_skb: 130 callbacks suppressed
	[Aug12 12:12] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.913038] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.066004] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.086767] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.012156] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.877478] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 12:14] kauditd_printk_skb: 26 callbacks suppressed
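The dmesg excerpt above is captured from inside the guest VM; an equivalent capture can be pulled manually with minikube ssh (a sketch, assuming the profile used in this run):

    minikube -p ha-220134 ssh "sudo dmesg | tail -n 50"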
	
	
	==> etcd [3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99] <==
	{"level":"warn","ts":"2024-08-12T12:19:46.336609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.35509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.436369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.5274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.535906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.543934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.54792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.565674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.578391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.615462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.625935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.635733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.639052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.648788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.655428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.665479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.675059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.685502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.690391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.694179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.701016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.70802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.71692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.729258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:19:46.73632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:19:46 up 8 min,  0 users,  load average: 0.18, 0.25, 0.14
	Linux ha-220134 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa] <==
	I0812 12:19:12.007175       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:19:22.006016       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:19:22.006138       1 main.go:299] handling current node
	I0812 12:19:22.006167       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:19:22.006188       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:19:22.006411       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:19:22.006441       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:19:22.006506       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:19:22.006524       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:19:31.997834       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:19:31.997917       1 main.go:299] handling current node
	I0812 12:19:31.997943       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:19:31.997951       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:19:31.998136       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:19:31.998160       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:19:31.998260       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:19:31.998329       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:19:42.002881       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:19:42.003000       1 main.go:299] handling current node
	I0812 12:19:42.003030       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:19:42.003050       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:19:42.003200       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:19:42.003221       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:19:42.003407       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:19:42.003456       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
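kindnet is still iterating over all four nodes and their pod CIDRs even though m02 is down. Whether the corresponding routes are actually installed on the host can be checked from the guest (a sketch; the expected routes are inferred from the IP/CIDR pairs logged above, e.g. 10.244.1.0/24 via 192.168.39.215):

    minikube -p ha-220134 ssh "ip route | grep 10.244"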
	
	
	==> kube-apiserver [61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4] <==
	I0812 12:12:11.213208       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0812 12:12:11.221113       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.228]
	I0812 12:12:11.230397       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:12:11.251937       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 12:12:11.294950       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 12:12:12.357598       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 12:12:12.384090       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0812 12:12:12.414815       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 12:12:24.757597       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0812 12:12:25.551999       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0812 12:16:09.943027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49948: use of closed network connection
	E0812 12:16:10.130561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49968: use of closed network connection
	E0812 12:16:10.342928       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49984: use of closed network connection
	E0812 12:16:10.573186       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49998: use of closed network connection
	E0812 12:16:10.768615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50068: use of closed network connection
	E0812 12:16:10.974785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50094: use of closed network connection
	E0812 12:16:11.172734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50098: use of closed network connection
	E0812 12:16:11.365638       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50118: use of closed network connection
	E0812 12:16:11.558080       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50136: use of closed network connection
	E0812 12:16:12.084482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50166: use of closed network connection
	E0812 12:16:12.266684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50178: use of closed network connection
	E0812 12:16:12.464018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50198: use of closed network connection
	E0812 12:16:12.654171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50212: use of closed network connection
	E0812 12:16:12.865763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50236: use of closed network connection
	W0812 12:17:41.239580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.228]
	
	
	==> kube-controller-manager [d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9] <==
	I0812 12:15:35.294189       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220134-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:15:39.667529       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m03"
	I0812 12:16:04.827718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.929208ms"
	I0812 12:16:04.876769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.984734ms"
	I0812 12:16:05.009630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.714804ms"
	I0812 12:16:05.244812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.340169ms"
	E0812 12:16:05.245013       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:16:05.245232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.766µs"
	I0812 12:16:05.257906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.748µs"
	I0812 12:16:05.278251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.901µs"
	I0812 12:16:05.359391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.964618ms"
	I0812 12:16:05.359493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.029µs"
	I0812 12:16:08.230464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.076642ms"
	I0812 12:16:08.230557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.729µs"
	I0812 12:16:09.315384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.540789ms"
	I0812 12:16:09.315719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.661µs"
	I0812 12:16:09.457388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.119094ms"
	I0812 12:16:09.457533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.667µs"
	I0812 12:16:43.900847       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-220134-m04\" does not exist"
	I0812 12:16:43.937726       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220134-m04" podCIDRs=["10.244.3.0/24"]
	I0812 12:16:44.679091       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m04"
	I0812 12:17:05.169644       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220134-m04"
	I0812 12:18:03.476805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220134-m04"
	I0812 12:18:03.523864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.151557ms"
	I0812 12:18:03.524084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.67µs"
	
	
	==> kube-proxy [43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e] <==
	I0812 12:12:26.916066       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:12:26.937082       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	I0812 12:12:26.985828       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:12:26.985927       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:12:26.986005       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:12:26.989628       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:12:26.989976       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:12:26.990035       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:12:26.992043       1 config.go:192] "Starting service config controller"
	I0812 12:12:26.992418       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:12:26.992483       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:12:26.992502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:12:26.993790       1 config.go:319] "Starting node config controller"
	I0812 12:12:26.993840       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:12:27.092914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:12:27.093103       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:12:27.094604       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768] <==
	W0812 12:12:10.559188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:12:10.559311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:12:10.568739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 12:12:10.568844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 12:12:10.576109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 12:12:10.576370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:12:10.597752       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 12:12:10.597845       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 12:12:10.625590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:12:10.625685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:12:10.663130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:12:10.663175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0812 12:12:13.072576       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 12:16:43.986514       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zcp4c\": pod kindnet-zcp4c is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zcp4c" node="ha-220134-m04"
	E0812 12:16:43.988978       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 829781ac-6d1e-4b05-8980-64006094f191(kube-system/kindnet-zcp4c) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zcp4c"
	E0812 12:16:43.989401       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zcp4c\": pod kindnet-zcp4c is already assigned to node \"ha-220134-m04\"" pod="kube-system/kindnet-zcp4c"
	I0812 12:16:43.989622       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zcp4c" node="ha-220134-m04"
	E0812 12:16:44.006124       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s6pvf\": pod kube-proxy-s6pvf is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s6pvf" node="ha-220134-m04"
	E0812 12:16:44.006923       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 107f24c1-a9a0-4eb3-99ce-a767ff974ea6(kube-system/kube-proxy-s6pvf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s6pvf"
	E0812 12:16:44.007014       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s6pvf\": pod kube-proxy-s6pvf is already assigned to node \"ha-220134-m04\"" pod="kube-system/kube-proxy-s6pvf"
	I0812 12:16:44.007090       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s6pvf" node="ha-220134-m04"
	E0812 12:16:44.022580       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-txxjp\": pod kube-proxy-txxjp is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-txxjp" node="ha-220134-m04"
	E0812 12:16:44.022798       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 27ac376e-f61e-4abe-9d7d-1201161d7d1f(kube-system/kube-proxy-txxjp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-txxjp"
	E0812 12:16:44.022882       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-txxjp\": pod kube-proxy-txxjp is already assigned to node \"ha-220134-m04\"" pod="kube-system/kube-proxy-txxjp"
	I0812 12:16:44.022990       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-txxjp" node="ha-220134-m04"
	
	
	==> kubelet <==
	Aug 12 12:15:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:15:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:16:04 ha-220134 kubelet[1373]: I0812 12:16:04.820423    1373 topology_manager.go:215] "Topology Admit Handler" podUID="31a40d8d-51b3-476c-a261-e4958fa5001a" podNamespace="default" podName="busybox-fc5497c4f-qh8vv"
	Aug 12 12:16:04 ha-220134 kubelet[1373]: I0812 12:16:04.937040    1373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmj9m\" (UniqueName: \"kubernetes.io/projected/31a40d8d-51b3-476c-a261-e4958fa5001a-kube-api-access-hmj9m\") pod \"busybox-fc5497c4f-qh8vv\" (UID: \"31a40d8d-51b3-476c-a261-e4958fa5001a\") " pod="default/busybox-fc5497c4f-qh8vv"
	Aug 12 12:16:12 ha-220134 kubelet[1373]: E0812 12:16:12.305781    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:16:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:16:12 ha-220134 kubelet[1373]: E0812 12:16:12.865172    1373 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:57490->127.0.0.1:44369: read tcp 127.0.0.1:57490->127.0.0.1:44369: read: connection reset by peer
	Aug 12 12:17:12 ha-220134 kubelet[1373]: E0812 12:17:12.310771    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:17:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:18:12 ha-220134 kubelet[1373]: E0812 12:18:12.305804    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:18:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:19:12 ha-220134 kubelet[1373]: E0812 12:19:12.309461    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:19:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
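The recurring iptables-canary failure says the guest kernel exposes no ip6tables nat table. Whether the module is loaded or even available can be checked from inside the VM (a sketch; modprobe will simply fail if the minikube ISO kernel does not ship ip6table_nat, which the repeated error suggests):

    minikube -p ha-220134 ssh "lsmod | grep ip6table_nat; sudo modprobe ip6table_nat && echo loaded || echo not available"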
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220134 -n ha-220134
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (3.187712797s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:19:51.338263  490701 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:19:51.338511  490701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:51.338519  490701 out.go:304] Setting ErrFile to fd 2...
	I0812 12:19:51.338523  490701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:51.338700  490701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:19:51.338874  490701 out.go:298] Setting JSON to false
	I0812 12:19:51.338897  490701 mustload.go:65] Loading cluster: ha-220134
	I0812 12:19:51.338980  490701 notify.go:220] Checking for updates...
	I0812 12:19:51.339283  490701 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:19:51.339298  490701 status.go:255] checking status of ha-220134 ...
	I0812 12:19:51.339646  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.339696  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.355446  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0812 12:19:51.355944  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.356523  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.356555  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.356991  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.357218  490701 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:19:51.359181  490701 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:19:51.359202  490701 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:51.359593  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.359649  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.375657  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0812 12:19:51.376144  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.376666  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.376691  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.376990  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.377172  490701 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:19:51.380046  490701 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:51.380492  490701 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:51.380523  490701 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:51.380702  490701 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:51.381112  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.381188  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.396073  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I0812 12:19:51.396569  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.397095  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.397123  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.397451  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.397660  490701 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:19:51.397881  490701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:51.397901  490701 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:19:51.400372  490701 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:51.400841  490701 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:51.400869  490701 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:51.401034  490701 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:19:51.401244  490701 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:19:51.401426  490701 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:19:51.401606  490701 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:19:51.480913  490701 ssh_runner.go:195] Run: systemctl --version
	I0812 12:19:51.487215  490701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:51.503359  490701 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:51.503397  490701 api_server.go:166] Checking apiserver status ...
	I0812 12:19:51.503446  490701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:51.519004  490701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:19:51.530282  490701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:51.530348  490701 ssh_runner.go:195] Run: ls
	I0812 12:19:51.535808  490701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:51.540199  490701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:51.540222  490701 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:19:51.540232  490701 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:51.540248  490701 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:19:51.540583  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.540632  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.556790  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I0812 12:19:51.557253  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.557754  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.557779  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.558108  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.558322  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:19:51.559804  490701 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:19:51.559820  490701 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:51.560110  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.560142  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.575125  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0812 12:19:51.575616  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.576126  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.576151  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.576497  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.576695  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:19:51.579419  490701 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:51.579869  490701 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:51.579898  490701 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:51.580043  490701 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:51.580329  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:51.580394  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:51.595216  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
	I0812 12:19:51.595629  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:51.596100  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:51.596122  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:51.596399  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:51.596567  490701 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:19:51.596826  490701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:51.596848  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:19:51.599963  490701 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:51.600452  490701 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:51.600477  490701 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:51.600654  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:19:51.600841  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:19:51.600988  490701 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:19:51.601148  490701 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:19:54.121486  490701 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:19:54.121637  490701 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:19:54.121669  490701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:54.121676  490701 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:19:54.121699  490701 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:54.121706  490701 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:19:54.122027  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.122082  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.137527  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0812 12:19:54.137992  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.138436  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.138477  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.138824  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.139004  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:19:54.140573  490701 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:19:54.140588  490701 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:54.141010  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.141051  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.155961  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0812 12:19:54.156411  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.156875  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.156895  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.157227  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.157414  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:19:54.160656  490701 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:54.161122  490701 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:54.161149  490701 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:54.161306  490701 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:54.161754  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.161817  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.176600  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0812 12:19:54.177107  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.177589  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.177612  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.177907  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.178124  490701 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:19:54.178306  490701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:54.178328  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:19:54.181293  490701 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:54.181796  490701 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:54.181836  490701 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:54.181939  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:19:54.182110  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:19:54.182265  490701 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:19:54.182450  490701 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:19:54.268825  490701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:54.285076  490701 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:54.285139  490701 api_server.go:166] Checking apiserver status ...
	I0812 12:19:54.285186  490701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:54.299877  490701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:19:54.310328  490701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:54.310413  490701 ssh_runner.go:195] Run: ls
	I0812 12:19:54.315035  490701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:54.319536  490701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:54.319571  490701 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:19:54.319581  490701 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:54.319598  490701 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:19:54.319969  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.320025  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.336755  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I0812 12:19:54.337246  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.337728  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.337754  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.338094  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.338277  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:19:54.339860  490701 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:19:54.339879  490701 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:54.340236  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.340291  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.357889  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0812 12:19:54.358325  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.358877  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.358909  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.359317  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.359717  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:19:54.362411  490701 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:54.362899  490701 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:54.362927  490701 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:54.363094  490701 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:54.363409  490701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:54.363461  490701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:54.379151  490701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0812 12:19:54.379613  490701 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:54.380151  490701 main.go:141] libmachine: Using API Version  1
	I0812 12:19:54.380177  490701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:54.380476  490701 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:54.380676  490701 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:19:54.380909  490701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:54.380938  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:19:54.383666  490701 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:54.384207  490701 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:54.384229  490701 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:54.384441  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:19:54.384626  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:19:54.384782  490701 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:19:54.384906  490701 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:19:54.464966  490701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:54.481160  490701 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
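The stderr trace above shows the shape of each per-node status probe: an SSH session is opened to the node, `df -h /var` checks storage, `systemctl is-active kubelet` checks the kubelet, and the apiserver is confirmed by hitting `https://192.168.39.254:8443/healthz` and expecting a 200. The snippet below is only a minimal, hypothetical sketch of that last healthz check for reference while reading the log; the timeout and the decision to skip TLS verification are assumptions for illustration, not minikube's actual implementation.

	// healthz_probe.go - sketch of the "Checking apiserver healthz" step seen above.
	// Endpoint taken from the log; TLS/timeout settings are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The control plane's certificate is not trusted by the probing host,
			// so verification is skipped here purely for the probe (assumption).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println(ok, err)
	}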
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (2.55939982s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:19:55.042068  490801 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:19:55.042462  490801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:55.042480  490801 out.go:304] Setting ErrFile to fd 2...
	I0812 12:19:55.042487  490801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:55.043146  490801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:19:55.043609  490801 out.go:298] Setting JSON to false
	I0812 12:19:55.043713  490801 mustload.go:65] Loading cluster: ha-220134
	I0812 12:19:55.043800  490801 notify.go:220] Checking for updates...
	I0812 12:19:55.044372  490801 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:19:55.044393  490801 status.go:255] checking status of ha-220134 ...
	I0812 12:19:55.044852  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.044910  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.061429  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0812 12:19:55.062024  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.062641  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.062681  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.063147  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.063384  490801 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:19:55.065120  490801 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:19:55.065139  490801 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:55.065499  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.065550  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.081620  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0812 12:19:55.082002  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.082526  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.082559  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.082918  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.083145  490801 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:19:55.085853  490801 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:55.086222  490801 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:55.086261  490801 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:55.086330  490801 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:55.086652  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.086689  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.102425  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0812 12:19:55.102981  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.103547  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.103579  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.103933  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.104161  490801 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:19:55.104444  490801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:55.104479  490801 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:19:55.107599  490801 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:55.108148  490801 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:55.108175  490801 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:55.108413  490801 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:19:55.108619  490801 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:19:55.108778  490801 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:19:55.108960  490801 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:19:55.188849  490801 ssh_runner.go:195] Run: systemctl --version
	I0812 12:19:55.194956  490801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:55.211183  490801 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:55.211224  490801 api_server.go:166] Checking apiserver status ...
	I0812 12:19:55.211272  490801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:55.227399  490801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:19:55.238270  490801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:55.238329  490801 ssh_runner.go:195] Run: ls
	I0812 12:19:55.243123  490801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:55.247494  490801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:55.247523  490801 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:19:55.247534  490801 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:55.247552  490801 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:19:55.247883  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.247946  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.265396  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0812 12:19:55.265883  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.266329  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.266353  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.266670  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.266891  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:19:55.268344  490801 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:19:55.268362  490801 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:55.268687  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.268739  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.283645  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0812 12:19:55.284136  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.284672  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.284692  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.285035  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.285270  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:19:55.288121  490801 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:55.288538  490801 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:55.288554  490801 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:55.288762  490801 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:55.289206  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:55.289260  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:55.305376  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35963
	I0812 12:19:55.305913  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:55.306467  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:55.306497  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:55.306852  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:55.307109  490801 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:19:55.307354  490801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:55.307381  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:19:55.310670  490801 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:55.311106  490801 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:55.311134  490801 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:55.311295  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:19:55.311466  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:19:55.311644  490801 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:19:55.311785  490801 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:19:57.193467  490801 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:19:57.193576  490801 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:19:57.193595  490801 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:57.193602  490801 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:19:57.193618  490801 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:19:57.193638  490801 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:19:57.193991  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.194041  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.211674  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0812 12:19:57.212260  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.212865  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.212897  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.213307  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.213579  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:19:57.215753  490801 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:19:57.215772  490801 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:57.216109  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.216158  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.231958  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0812 12:19:57.232609  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.233234  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.233272  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.233644  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.233840  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:19:57.237289  490801 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:57.237790  490801 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:57.237818  490801 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:57.237982  490801 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:19:57.238412  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.238465  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.255826  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0812 12:19:57.256319  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.256888  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.256916  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.257302  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.257590  490801 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:19:57.257801  490801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:57.257823  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:19:57.261034  490801 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:57.261658  490801 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:19:57.261691  490801 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:19:57.261887  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:19:57.262107  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:19:57.262272  490801 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:19:57.262425  490801 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:19:57.346141  490801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:57.362059  490801 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:57.362098  490801 api_server.go:166] Checking apiserver status ...
	I0812 12:19:57.362141  490801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:57.376165  490801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:19:57.386641  490801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:57.386738  490801 ssh_runner.go:195] Run: ls
	I0812 12:19:57.391303  490801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:57.395991  490801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:57.396013  490801 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:19:57.396022  490801 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:57.396037  490801 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:19:57.396331  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.396372  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.412196  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0812 12:19:57.412783  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.413359  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.413386  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.413752  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.413993  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:19:57.415592  490801 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:19:57.415627  490801 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:57.415940  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.415988  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.432509  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0812 12:19:57.432948  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.433514  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.433537  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.433928  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.434199  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:19:57.437693  490801 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:57.438164  490801 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:57.438202  490801 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:57.438341  490801 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:19:57.438731  490801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:57.438794  490801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:57.455322  490801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0812 12:19:57.455812  490801 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:57.456290  490801 main.go:141] libmachine: Using API Version  1
	I0812 12:19:57.456312  490801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:57.456647  490801 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:57.456858  490801 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:19:57.457095  490801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:57.457124  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:19:57.460087  490801 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:57.460543  490801 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:19:57.460581  490801 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:19:57.460698  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:19:57.460883  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:19:57.461034  490801 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:19:57.461179  490801 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:19:57.537869  490801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:57.555072  490801 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
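As in the previous run, the SSH dial to ha-220134-m02 (192.168.39.215:22) fails with "no route to host", and that single failure is what degrades the node's reported status to Host:Error with Kubelet and APIServer marked Nonexistent. The sketch below mirrors that mapping so the status table above is easier to interpret; the NodeStatus type and field names here are hypothetical and are not minikube's own types.

	// node_status.go - sketch of how an unreachable SSH endpoint maps to the
	// "Host:Error Kubelet:Nonexistent APIServer:Nonexistent" entries above.
	// NodeStatus and its fields are assumptions for illustration only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	type NodeStatus struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func checkNode(name, sshAddr string) NodeStatus {
		conn, err := net.DialTimeout("tcp", sshAddr, 3*time.Second)
		if err != nil {
			// SSH unreachable (e.g. "no route to host"): the host is reported as
			// errored and everything that depends on SSH as Nonexistent.
			return NodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
		}
		conn.Close()
		// Reachable: the real probe would go on to check kubelet and the apiserver.
		return NodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
	}

	func main() {
		fmt.Printf("%+v\n", checkNode("ha-220134-m02", "192.168.39.215:22"))
	}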
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (4.812919485s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:19:58.937016  490900 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:19:58.937162  490900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:58.937172  490900 out.go:304] Setting ErrFile to fd 2...
	I0812 12:19:58.937176  490900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:19:58.937359  490900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:19:58.937544  490900 out.go:298] Setting JSON to false
	I0812 12:19:58.937568  490900 mustload.go:65] Loading cluster: ha-220134
	I0812 12:19:58.937615  490900 notify.go:220] Checking for updates...
	I0812 12:19:58.938024  490900 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:19:58.938045  490900 status.go:255] checking status of ha-220134 ...
	I0812 12:19:58.938479  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:58.938571  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:58.955782  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37741
	I0812 12:19:58.956327  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:58.957069  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:58.957120  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:58.957578  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:58.957837  490900 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:19:58.959874  490900 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:19:58.959892  490900 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:58.960300  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:58.960356  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:58.977042  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0812 12:19:58.977632  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:58.978085  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:58.978104  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:58.978424  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:58.978621  490900 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:19:58.982014  490900 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:58.982465  490900 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:58.982497  490900 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:58.982648  490900 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:19:58.982971  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:58.983011  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:58.998903  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I0812 12:19:58.999358  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:58.999929  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:58.999968  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:59.000346  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:59.000559  490900 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:19:59.000823  490900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:59.000864  490900 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:19:59.004220  490900 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:59.004680  490900 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:19:59.004717  490900 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:19:59.004929  490900 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:19:59.005111  490900 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:19:59.005282  490900 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:19:59.005469  490900 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:19:59.085742  490900 ssh_runner.go:195] Run: systemctl --version
	I0812 12:19:59.091897  490900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:19:59.107164  490900 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:19:59.107203  490900 api_server.go:166] Checking apiserver status ...
	I0812 12:19:59.107238  490900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:19:59.121982  490900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:19:59.133561  490900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:19:59.133655  490900 ssh_runner.go:195] Run: ls
	I0812 12:19:59.138379  490900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:19:59.144841  490900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:19:59.144875  490900 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:19:59.144886  490900 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:19:59.144903  490900 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:19:59.145356  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:59.145410  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:59.162321  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0812 12:19:59.162764  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:59.163256  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:59.163279  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:59.163644  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:59.163879  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:19:59.165379  490900 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:19:59.165397  490900 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:59.165729  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:59.165784  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:59.182086  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0812 12:19:59.182614  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:59.183136  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:59.183165  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:59.183485  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:59.183700  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:19:59.186545  490900 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:59.187036  490900 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:59.187088  490900 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:59.187344  490900 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:19:59.187648  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:19:59.187683  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:19:59.202828  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0812 12:19:59.203266  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:19:59.203837  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:19:59.203868  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:19:59.204205  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:19:59.204434  490900 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:19:59.204645  490900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:19:59.204702  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:19:59.207937  490900 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:59.208426  490900 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:19:59.208484  490900 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:19:59.208691  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:19:59.208916  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:19:59.209171  490900 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:19:59.209327  490900 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:20:00.265433  490900 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:00.265520  490900 retry.go:31] will retry after 287.16189ms: dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:03.337493  490900 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:03.337608  490900 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:20:03.337661  490900 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:03.337671  490900 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:20:03.337702  490900 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:03.337710  490900 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:03.338054  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.338107  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.353659  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0812 12:20:03.354205  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.354711  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.354737  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.355073  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.355277  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:03.356747  490900 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:03.356766  490900 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:03.357074  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.357150  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.374144  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0812 12:20:03.374664  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.375177  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.375201  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.375485  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.375769  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:03.378767  490900 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:03.379235  490900 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:03.379268  490900 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:03.379418  490900 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:03.379873  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.379927  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.396346  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0812 12:20:03.396854  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.397398  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.397424  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.397807  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.397963  490900 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:03.398141  490900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:03.398160  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:03.401006  490900 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:03.401411  490900 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:03.401446  490900 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:03.401603  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:03.401808  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:03.401954  490900 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:03.402081  490900 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:03.485013  490900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:03.500067  490900 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:03.500100  490900 api_server.go:166] Checking apiserver status ...
	I0812 12:20:03.500135  490900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:03.515088  490900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:03.526270  490900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:03.526332  490900 ssh_runner.go:195] Run: ls
	I0812 12:20:03.533739  490900 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:03.538032  490900 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:03.538062  490900 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:03.538071  490900 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:03.538088  490900 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:03.538390  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.538427  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.554381  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0812 12:20:03.554905  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.555444  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.555511  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.555843  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.556034  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:03.557598  490900 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:03.557614  490900 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:03.557908  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.557948  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.573627  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0812 12:20:03.574116  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.574654  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.574676  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.575017  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.575259  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:03.578352  490900 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:03.578821  490900 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:03.578862  490900 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:03.579037  490900 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:03.579343  490900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:03.579402  490900 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:03.596860  490900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
	I0812 12:20:03.597396  490900 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:03.598061  490900 main.go:141] libmachine: Using API Version  1
	I0812 12:20:03.598089  490900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:03.598499  490900 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:03.598747  490900 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:03.598971  490900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:03.598998  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:03.602857  490900 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:03.603429  490900 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:03.603453  490900 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:03.603676  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:03.603945  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:03.604129  490900 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:03.604366  490900 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:03.685009  490900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:03.701186  490900 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
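Note on the apiserver probe in the transcript above: the status run resolves the HA virtual IP from the kubeconfig ("https://192.168.39.254:8443") and treats an HTTP 200 "ok" from /healthz as "apiserver status = Running". The sketch below only illustrates that check; apiserverHealthy is a hypothetical helper, not minikube's actual code, and it skips TLS verification on the assumption that the test cluster uses a self-signed CA.

	// Minimal sketch of the healthz probe seen in the log above
	// (illustrative only, not minikube's implementation).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy reports whether GET <endpoint>/healthz returns 200 "ok".
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: the cluster CA is self-signed, so verification is skipped here.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println("apiserver healthy:", ok, err)
	}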
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (4.712108096s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:05.481273  491000 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:05.481683  491000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:05.481698  491000 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:05.481704  491000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:05.482197  491000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:05.482623  491000 out.go:298] Setting JSON to false
	I0812 12:20:05.482652  491000 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:05.482703  491000 notify.go:220] Checking for updates...
	I0812 12:20:05.483452  491000 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:05.483483  491000 status.go:255] checking status of ha-220134 ...
	I0812 12:20:05.483979  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.484031  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.500245  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0812 12:20:05.500754  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.501444  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.501471  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.501893  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.502127  491000 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:20:05.503904  491000 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:20:05.503924  491000 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:05.504215  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.504258  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.521388  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0812 12:20:05.521866  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.522369  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.522390  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.522919  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.523151  491000 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:20:05.526744  491000 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:05.527269  491000 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:05.527311  491000 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:05.527472  491000 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:05.527805  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.527846  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.545550  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0812 12:20:05.546016  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.546527  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.546555  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.546898  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.547154  491000 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:20:05.547377  491000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:05.547416  491000 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:20:05.551198  491000 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:05.551600  491000 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:05.551630  491000 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:05.551856  491000 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:20:05.552097  491000 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:20:05.552299  491000 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:20:05.552470  491000 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:20:05.633158  491000 ssh_runner.go:195] Run: systemctl --version
	I0812 12:20:05.640549  491000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:05.659855  491000 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:05.659889  491000 api_server.go:166] Checking apiserver status ...
	I0812 12:20:05.659941  491000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:05.675589  491000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:20:05.687915  491000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:05.687974  491000 ssh_runner.go:195] Run: ls
	I0812 12:20:05.692640  491000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:05.699060  491000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:05.699090  491000 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:20:05.699101  491000 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:05.699124  491000 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:20:05.699475  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.699528  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.715522  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40763
	I0812 12:20:05.715995  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.716452  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.716475  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.716822  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.717052  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:20:05.718731  491000 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:20:05.718768  491000 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:05.719061  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.719100  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.734923  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35665
	I0812 12:20:05.735422  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.735950  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.735981  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.736361  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.736689  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:20:05.739960  491000 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:05.740524  491000 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:05.740556  491000 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:05.740711  491000 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:05.741029  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:05.741067  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:05.756786  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0812 12:20:05.757268  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:05.757790  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:05.757816  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:05.758127  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:05.758422  491000 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:20:05.758666  491000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:05.758698  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:20:05.761884  491000 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:05.762420  491000 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:05.762442  491000 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:05.762665  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:20:05.762863  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:20:05.763030  491000 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:20:05.763183  491000 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:20:06.409363  491000 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:06.409441  491000 retry.go:31] will retry after 339.407835ms: dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:09.801388  491000 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:09.801517  491000 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:20:09.801545  491000 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:09.801556  491000 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:20:09.801589  491000 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:09.801602  491000 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:09.801934  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:09.801981  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:09.817397  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0812 12:20:09.817862  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:09.818364  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:09.818389  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:09.818723  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:09.818927  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:09.820545  491000 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:09.820564  491000 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:09.820892  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:09.820935  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:09.836073  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0812 12:20:09.836556  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:09.837057  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:09.837092  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:09.837432  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:09.837665  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:09.840427  491000 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:09.840821  491000 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:09.840842  491000 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:09.841108  491000 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:09.841414  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:09.841467  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:09.857474  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0812 12:20:09.857960  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:09.858478  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:09.858505  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:09.858858  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:09.859075  491000 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:09.859327  491000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:09.859352  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:09.862952  491000 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:09.863461  491000 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:09.863537  491000 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:09.863672  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:09.863902  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:09.864082  491000 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:09.864226  491000 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:09.944763  491000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:09.959539  491000 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:09.959572  491000 api_server.go:166] Checking apiserver status ...
	I0812 12:20:09.959608  491000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:09.973204  491000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:09.982828  491000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:09.982900  491000 ssh_runner.go:195] Run: ls
	I0812 12:20:09.987608  491000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:09.992239  491000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:09.992264  491000 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:09.992276  491000 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:09.992301  491000 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:09.992611  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:09.992687  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:10.008336  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I0812 12:20:10.008915  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:10.009402  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:10.009423  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:10.009775  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:10.010005  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:10.011605  491000 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:10.011634  491000 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:10.011942  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:10.011997  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:10.028540  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0812 12:20:10.029027  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:10.029588  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:10.029614  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:10.029945  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:10.030174  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:10.032677  491000 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:10.033261  491000 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:10.033296  491000 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:10.033580  491000 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:10.033874  491000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:10.033912  491000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:10.050287  491000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I0812 12:20:10.050725  491000 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:10.051240  491000 main.go:141] libmachine: Using API Version  1
	I0812 12:20:10.051269  491000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:10.051683  491000 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:10.051916  491000 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:10.052128  491000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:10.052157  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:10.054934  491000 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:10.055372  491000 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:10.055412  491000 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:10.055570  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:10.055758  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:10.055925  491000 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:10.056053  491000 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:10.132919  491000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:10.148435  491000 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
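Note on the ha-220134-m02 failure above: the SSH dial to 192.168.39.215:22 fails with "connect: no route to host", sshutil retries once (after 339.407835ms) and then status records the node as Host:Error with kubelet and apiserver Nonexistent. The sketch below shows the general retry-then-fail pattern, under the assumption that a plain TCP dial stands in for the SSH session; dialWithRetry is a hypothetical helper, not minikube's sshutil.

	// Minimal sketch of a dial-with-retry check against <ip>:22
	// (illustrative only, not minikube's sshutil).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry tries to open a TCP connection a few times before
	// declaring the node unreachable.
	func dialWithRetry(addr string, attempts int, wait time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(wait)
		}
		return fmt.Errorf("node unreachable after %d attempts: %w", attempts, err)
	}

	func main() {
		if err := dialWithRetry("192.168.39.215:22", 3, 500*time.Millisecond); err != nil {
			// This is the situation the log reports as Host:Error.
			fmt.Println("status: Host=Error:", err)
		}
	}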
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (3.763091267s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:12.745190  491117 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:12.745517  491117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:12.745533  491117 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:12.745540  491117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:12.745752  491117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:12.745932  491117 out.go:298] Setting JSON to false
	I0812 12:20:12.745959  491117 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:12.746076  491117 notify.go:220] Checking for updates...
	I0812 12:20:12.746453  491117 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:12.746476  491117 status.go:255] checking status of ha-220134 ...
	I0812 12:20:12.746886  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:12.746956  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:12.768680  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0812 12:20:12.769215  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:12.769941  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:12.769967  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:12.770491  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:12.770757  491117 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:20:12.772663  491117 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:20:12.772684  491117 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:12.773058  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:12.773155  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:12.789731  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I0812 12:20:12.790176  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:12.790659  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:12.790700  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:12.791040  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:12.791273  491117 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:20:12.794288  491117 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:12.794834  491117 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:12.794868  491117 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:12.794988  491117 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:12.795380  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:12.795429  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:12.812119  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0812 12:20:12.812710  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:12.813402  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:12.813435  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:12.813783  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:12.814040  491117 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:20:12.814317  491117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:12.814353  491117 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:20:12.818409  491117 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:12.818984  491117 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:12.819020  491117 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:12.819228  491117 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:20:12.819447  491117 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:20:12.819688  491117 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:20:12.819841  491117 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:20:12.897263  491117 ssh_runner.go:195] Run: systemctl --version
	I0812 12:20:12.904050  491117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:12.920480  491117 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:12.920518  491117 api_server.go:166] Checking apiserver status ...
	I0812 12:20:12.920553  491117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:12.935372  491117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:20:12.946612  491117 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:12.946694  491117 ssh_runner.go:195] Run: ls
	I0812 12:20:12.951282  491117 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:12.955804  491117 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:12.955834  491117 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:20:12.955848  491117 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:12.955875  491117 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:20:12.956211  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:12.956260  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:12.972201  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0812 12:20:12.972733  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:12.973326  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:12.973358  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:12.973728  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:12.973969  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:20:12.975721  491117 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:20:12.975740  491117 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:12.976052  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:12.976091  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:12.994929  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0812 12:20:12.995403  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:12.995900  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:12.995927  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:12.996261  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:12.996511  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:20:13.000429  491117 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:13.000947  491117 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:13.000976  491117 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:13.001229  491117 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:13.001680  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:13.001734  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:13.018228  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0812 12:20:13.018714  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:13.019327  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:13.019350  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:13.019776  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:13.020067  491117 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:20:13.020383  491117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:13.020412  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:20:13.024209  491117 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:13.024736  491117 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:13.024761  491117 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:13.024964  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:20:13.025204  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:20:13.025386  491117 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:20:13.025564  491117 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:20:16.105413  491117 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:16.105526  491117 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:20:16.105549  491117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:16.105562  491117 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:20:16.105591  491117 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:16.105604  491117 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:16.105986  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.106037  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.121872  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0812 12:20:16.122367  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.122883  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.122909  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.123268  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.123501  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:16.125196  491117 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:16.125215  491117 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:16.125565  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.125623  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.140630  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42625
	I0812 12:20:16.141111  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.141556  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.141578  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.141915  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.142175  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:16.145282  491117 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:16.145805  491117 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:16.145831  491117 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:16.146028  491117 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:16.146393  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.146438  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.162146  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0812 12:20:16.162553  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.163102  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.163130  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.163508  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.163728  491117 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:16.163948  491117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:16.163970  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:16.166673  491117 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:16.167114  491117 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:16.167140  491117 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:16.167273  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:16.167441  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:16.167601  491117 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:16.167728  491117 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:16.249436  491117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:16.266013  491117 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:16.266062  491117 api_server.go:166] Checking apiserver status ...
	I0812 12:20:16.266109  491117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:16.282057  491117 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:16.293612  491117 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:16.293674  491117 ssh_runner.go:195] Run: ls
	I0812 12:20:16.298692  491117 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:16.305065  491117 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:16.305130  491117 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:16.305144  491117 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:16.305166  491117 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:16.305654  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.305724  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.322087  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0812 12:20:16.322635  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.323144  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.323171  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.323652  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.323925  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:16.325823  491117 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:16.325842  491117 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:16.326239  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.326292  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.342009  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0812 12:20:16.342491  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.343020  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.343038  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.343402  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.343640  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:16.346704  491117 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:16.347125  491117 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:16.347175  491117 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:16.347274  491117 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:16.347617  491117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:16.347661  491117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:16.364097  491117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0812 12:20:16.364595  491117 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:16.365073  491117 main.go:141] libmachine: Using API Version  1
	I0812 12:20:16.365119  491117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:16.365523  491117 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:16.365765  491117 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:16.365980  491117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:16.365998  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:16.368927  491117 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:16.369517  491117 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:16.369544  491117 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:16.369771  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:16.369988  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:16.370152  491117 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:16.370317  491117 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:16.448654  491117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:16.462052  491117 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
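Note on the recurring "unable to find freezer cgroup" warning in the transcripts above: `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` exits 1 because, on a cgroup v2 guest, /proc/<pid>/cgroup contains a single "0::<path>" line with no per-controller freezer entry, so the grep matches nothing, the warning is logged, and the status check moves on to the healthz probe. The sketch below only illustrates that v1/v2 difference; hasFreezerCgroup is a hypothetical helper, not the code emitting the warning.

	// Minimal sketch: does /proc/<pid>/cgroup expose a v1 freezer controller?
	// (illustrative only; on cgroup v2 hosts this returns false, matching
	// the exit-status-1 grep seen in the log.)
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func hasFreezerCgroup(pid int) (bool, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// cgroup v1 lines look like "7:freezer:/"; cgroup v2 is just "0::/...".
			fields := strings.SplitN(sc.Text(), ":", 3)
			if len(fields) == 3 && strings.Contains(fields[1], "freezer") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasFreezerCgroup(os.Getpid())
		fmt.Println("freezer controller present:", ok, err)
	}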
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (3.756476813s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:20.314584  491216 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:20.314709  491216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:20.314718  491216 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:20.314722  491216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:20.314906  491216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:20.315074  491216 out.go:298] Setting JSON to false
	I0812 12:20:20.315104  491216 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:20.315225  491216 notify.go:220] Checking for updates...
	I0812 12:20:20.315603  491216 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:20.315627  491216 status.go:255] checking status of ha-220134 ...
	I0812 12:20:20.316069  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.316115  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.334691  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0812 12:20:20.335243  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.335835  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.335856  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.336297  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.336624  491216 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:20:20.338427  491216 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:20:20.338445  491216 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:20.338748  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.338791  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.355986  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I0812 12:20:20.356521  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.357045  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.357068  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.357473  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.357702  491216 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:20:20.360909  491216 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:20.361379  491216 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:20.361407  491216 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:20.361572  491216 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:20.362017  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.362058  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.377550  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0812 12:20:20.378026  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.378672  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.378701  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.379109  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.379384  491216 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:20:20.379663  491216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:20.379698  491216 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:20:20.382605  491216 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:20.383088  491216 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:20.383124  491216 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:20.383259  491216 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:20:20.383441  491216 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:20:20.383615  491216 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:20:20.383736  491216 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:20:20.470419  491216 ssh_runner.go:195] Run: systemctl --version
	I0812 12:20:20.477821  491216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:20.494534  491216 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:20.494566  491216 api_server.go:166] Checking apiserver status ...
	I0812 12:20:20.494616  491216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:20.510351  491216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:20:20.523412  491216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:20.523483  491216 ssh_runner.go:195] Run: ls
	I0812 12:20:20.528421  491216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:20.534629  491216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:20.534659  491216 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:20:20.534671  491216 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:20.534693  491216 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:20:20.535065  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.535114  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.553284  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0812 12:20:20.553850  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.554374  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.554403  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.554796  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.555021  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:20:20.556860  491216 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:20:20.556879  491216 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:20.557256  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.557307  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.572930  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I0812 12:20:20.573363  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.573829  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.573854  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.574149  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.574372  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:20:20.576878  491216 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:20.577282  491216 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:20.577310  491216 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:20.577498  491216 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:20:20.577915  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:20.577967  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:20.593735  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46667
	I0812 12:20:20.594155  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:20.594676  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:20.594700  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:20.595079  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:20.595267  491216 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:20:20.595516  491216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:20.595552  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:20:20.598581  491216 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:20.599115  491216 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:20:20.599139  491216 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:20:20.599312  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:20:20.599530  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:20:20.599750  491216 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:20:20.599983  491216 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	W0812 12:20:23.657455  491216 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.215:22: connect: no route to host
	W0812 12:20:23.657583  491216 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0812 12:20:23.657610  491216 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:23.657630  491216 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:20:23.657651  491216 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	I0812 12:20:23.657663  491216 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:23.658029  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.658086  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.675116  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0812 12:20:23.675690  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.676249  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.676285  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.676679  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.676906  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:23.678629  491216 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:23.678645  491216 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:23.678972  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.679018  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.694295  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40643
	I0812 12:20:23.694900  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.695405  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.695431  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.695793  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.695986  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:23.699395  491216 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:23.699845  491216 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:23.699884  491216 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:23.700046  491216 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:23.700369  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.700410  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.715869  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0812 12:20:23.716367  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.716926  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.716968  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.717341  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.717525  491216 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:23.717732  491216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:23.717756  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:23.720787  491216 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:23.721392  491216 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:23.721433  491216 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:23.721647  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:23.721864  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:23.722039  491216 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:23.722234  491216 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:23.805542  491216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:23.823717  491216 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:23.823751  491216 api_server.go:166] Checking apiserver status ...
	I0812 12:20:23.823805  491216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:23.840230  491216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:23.852682  491216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:23.852751  491216 ssh_runner.go:195] Run: ls
	I0812 12:20:23.858036  491216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:23.863148  491216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:23.863178  491216 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:23.863188  491216 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:23.863204  491216 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:23.863638  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.863693  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.878978  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46461
	I0812 12:20:23.879405  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.879873  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.879904  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.880222  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.880455  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:23.882165  491216 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:23.882182  491216 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:23.882609  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.882683  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.898514  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0812 12:20:23.898967  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.899512  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.899533  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.899894  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.900098  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:23.903026  491216 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:23.903511  491216 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:23.903545  491216 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:23.903695  491216 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:23.904029  491216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:23.904069  491216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:23.920190  491216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0812 12:20:23.920744  491216 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:23.921255  491216 main.go:141] libmachine: Using API Version  1
	I0812 12:20:23.921279  491216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:23.921693  491216 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:23.921918  491216 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:23.922106  491216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:23.922127  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:23.924960  491216 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:23.925464  491216 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:23.925493  491216 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:23.925650  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:23.925843  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:23.926012  491216 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:23.926162  491216 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:24.005404  491216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:24.021326  491216 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
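
Note on the ha-220134-m02 result above: libvirt still reports the domain as running, but the SSH dial to 192.168.39.215:22 times out with "no route to host", so the status probe records Host:Error with kubelet and apiserver Nonexistent. The sketch below is an editor's illustration of that reachability check, not code from the suite; the helper name is hypothetical and the IP/timeout are taken from the log.

// reachability_sketch.go -- TCP dial to the node's SSH port, as the failed session attempt implies.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "connect: no route to host" when the VM is unreachable
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.215:22", 3*time.Second); err != nil {
		fmt.Println("host unreachable:", err)
		return
	}
	fmt.Println("ssh port reachable")
}
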
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 7 (632.652812ms)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:35.293335  491368 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:35.293465  491368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:35.293474  491368 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:35.293478  491368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:35.293641  491368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:35.293804  491368 out.go:298] Setting JSON to false
	I0812 12:20:35.293827  491368 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:35.293931  491368 notify.go:220] Checking for updates...
	I0812 12:20:35.294235  491368 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:35.294250  491368 status.go:255] checking status of ha-220134 ...
	I0812 12:20:35.294634  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.294690  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.310206  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0812 12:20:35.310704  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.311324  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.311353  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.311683  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.311908  491368 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:20:35.313931  491368 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:20:35.313954  491368 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:35.314303  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.314352  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.332580  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0812 12:20:35.333067  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.333595  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.333617  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.333944  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.334139  491368 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:20:35.337136  491368 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:35.337641  491368 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:35.337681  491368 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:35.337883  491368 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:35.338191  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.338237  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.354284  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0812 12:20:35.354724  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.355471  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.355504  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.355939  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.356317  491368 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:20:35.356729  491368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:35.356770  491368 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:20:35.359862  491368 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:35.360463  491368 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:35.360500  491368 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:35.360816  491368 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:20:35.361057  491368 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:20:35.361269  491368 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:20:35.361477  491368 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:20:35.441670  491368 ssh_runner.go:195] Run: systemctl --version
	I0812 12:20:35.448809  491368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:35.466896  491368 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:35.466928  491368 api_server.go:166] Checking apiserver status ...
	I0812 12:20:35.466963  491368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:35.482506  491368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:20:35.492741  491368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:35.492812  491368 ssh_runner.go:195] Run: ls
	I0812 12:20:35.497294  491368 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:35.501430  491368 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:35.501464  491368 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:20:35.501474  491368 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:35.501491  491368 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:20:35.501798  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.501835  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.517389  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0812 12:20:35.517986  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.518573  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.518595  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.518860  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.519080  491368 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:20:35.520828  491368 status.go:330] ha-220134-m02 host status = "Stopped" (err=<nil>)
	I0812 12:20:35.520844  491368 status.go:343] host is not running, skipping remaining checks
	I0812 12:20:35.520849  491368 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:35.520865  491368 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:35.521292  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.521354  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.538905  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0812 12:20:35.539396  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.539912  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.539935  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.540249  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.540480  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:35.542047  491368 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:35.542063  491368 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:35.542363  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.542396  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.557871  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I0812 12:20:35.558275  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.558727  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.558752  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.559065  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.559266  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:35.562271  491368 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:35.562676  491368 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:35.562728  491368 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:35.562880  491368 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:35.563225  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.563261  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.578633  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0812 12:20:35.579069  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.579546  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.579570  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.579886  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.580065  491368 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:35.580240  491368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:35.580260  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:35.582968  491368 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:35.583454  491368 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:35.583487  491368 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:35.583665  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:35.583850  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:35.584034  491368 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:35.584334  491368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:35.665901  491368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:35.681156  491368 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:35.681191  491368 api_server.go:166] Checking apiserver status ...
	I0812 12:20:35.681231  491368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:35.695227  491368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:35.705540  491368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:35.705602  491368 ssh_runner.go:195] Run: ls
	I0812 12:20:35.712436  491368 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:35.717686  491368 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:35.717716  491368 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:35.717726  491368 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:35.717743  491368 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:35.718146  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.718190  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.736070  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0812 12:20:35.736525  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.737075  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.737116  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.737479  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.737696  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:35.739236  491368 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:35.739258  491368 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:35.739582  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.739626  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.755146  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I0812 12:20:35.755892  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.756509  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.756533  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.756905  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.757204  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:35.760300  491368 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:35.760906  491368 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:35.760948  491368 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:35.761129  491368 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:35.761499  491368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:35.761544  491368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:35.777333  491368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0812 12:20:35.777787  491368 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:35.778301  491368 main.go:141] libmachine: Using API Version  1
	I0812 12:20:35.778334  491368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:35.778682  491368 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:35.778869  491368 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:35.779070  491368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:35.779092  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:35.782016  491368 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:35.782413  491368 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:35.782433  491368 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:35.782679  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:35.782917  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:35.783189  491368 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:35.783372  491368 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:35.860371  491368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:35.876590  491368 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
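
Note on the apiserver check in the block above: after locating the kubeconfig server (the HA virtual IP https://192.168.39.254:8443), the status run issues a GET against /healthz and treats an HTTP 200 "ok" as apiserver Running. The following is a minimal standalone sketch of that probe, not the suite's implementation; TLS verification is skipped here only because the sketch does not load the cluster CA.

// healthz_sketch.go -- illustrative GET against the cluster VIP's /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expected when healthy: 200 ok
}
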
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 7 (631.147529ms)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-220134-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:42.098201  491472 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:42.098343  491472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:42.098354  491472 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:42.098361  491472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:42.098537  491472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:42.098709  491472 out.go:298] Setting JSON to false
	I0812 12:20:42.098735  491472 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:42.098855  491472 notify.go:220] Checking for updates...
	I0812 12:20:42.099285  491472 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:42.099312  491472 status.go:255] checking status of ha-220134 ...
	I0812 12:20:42.099835  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.099912  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.116322  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33679
	I0812 12:20:42.116793  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.117444  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.117469  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.117932  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.118162  491472 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:20:42.119832  491472 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:20:42.119848  491472 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:42.120169  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.120204  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.135239  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0812 12:20:42.135617  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.136053  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.136067  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.136366  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.136539  491472 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:20:42.139051  491472 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:42.139410  491472 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:42.139451  491472 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:42.139526  491472 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:20:42.139826  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.139865  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.155116  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0812 12:20:42.155509  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.155990  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.156010  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.156359  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.156608  491472 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:20:42.156841  491472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:42.156881  491472 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:20:42.159889  491472 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:42.160388  491472 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:20:42.160414  491472 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:20:42.160756  491472 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:20:42.160944  491472 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:20:42.161135  491472 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:20:42.161294  491472 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:20:42.244841  491472 ssh_runner.go:195] Run: systemctl --version
	I0812 12:20:42.251151  491472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:42.267163  491472 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:42.267202  491472 api_server.go:166] Checking apiserver status ...
	I0812 12:20:42.267247  491472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:42.283332  491472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	W0812 12:20:42.293714  491472 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:42.293788  491472 ssh_runner.go:195] Run: ls
	I0812 12:20:42.298909  491472 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:42.303392  491472 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:42.303428  491472 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:20:42.303439  491472 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:42.303456  491472 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:20:42.303779  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.303818  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.319485  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I0812 12:20:42.319953  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.320490  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.320516  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.320849  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.321035  491472 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:20:42.322706  491472 status.go:330] ha-220134-m02 host status = "Stopped" (err=<nil>)
	I0812 12:20:42.322724  491472 status.go:343] host is not running, skipping remaining checks
	I0812 12:20:42.322732  491472 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:42.322755  491472 status.go:255] checking status of ha-220134-m03 ...
	I0812 12:20:42.323047  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.323087  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.338989  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I0812 12:20:42.339478  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.340007  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.340032  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.340336  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.340563  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:42.342131  491472 status.go:330] ha-220134-m03 host status = "Running" (err=<nil>)
	I0812 12:20:42.342160  491472 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:42.342494  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.342536  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.358159  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
	I0812 12:20:42.358735  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.359274  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.359299  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.359641  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.359876  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:20:42.362981  491472 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:42.363481  491472 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:42.363506  491472 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:42.363679  491472 host.go:66] Checking if "ha-220134-m03" exists ...
	I0812 12:20:42.364031  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.364077  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.379745  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0812 12:20:42.380229  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.380767  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.380794  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.381184  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.381383  491472 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:42.381619  491472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:42.381646  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:42.384257  491472 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:42.384741  491472 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:42.384771  491472 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:42.384917  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:42.385128  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:42.385303  491472 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:42.385460  491472 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:42.468785  491472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:42.487716  491472 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:20:42.487764  491472 api_server.go:166] Checking apiserver status ...
	I0812 12:20:42.487814  491472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:20:42.503909  491472 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W0812 12:20:42.514999  491472 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:20:42.515076  491472 ssh_runner.go:195] Run: ls
	I0812 12:20:42.519759  491472 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:20:42.524116  491472 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:20:42.524144  491472 status.go:422] ha-220134-m03 apiserver status = Running (err=<nil>)
	I0812 12:20:42.524154  491472 status.go:257] ha-220134-m03 status: &{Name:ha-220134-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:20:42.524168  491472 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:20:42.524585  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.524627  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.541124  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45083
	I0812 12:20:42.541625  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.542134  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.542154  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.542442  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.542681  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:42.544138  491472 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:20:42.544159  491472 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:42.544577  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.544627  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.560220  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0812 12:20:42.560817  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.561469  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.561501  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.561861  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.562073  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:20:42.565070  491472 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:42.565674  491472 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:42.565702  491472 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:42.565883  491472 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:20:42.566195  491472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:42.566242  491472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:42.581912  491472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0812 12:20:42.582448  491472 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:42.583056  491472 main.go:141] libmachine: Using API Version  1
	I0812 12:20:42.583083  491472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:42.583535  491472 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:42.583787  491472 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:42.584083  491472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:20:42.584106  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:42.587310  491472 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:42.587845  491472 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:42.587873  491472 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:42.588037  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:42.588216  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:42.588338  491472 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:42.588437  491472 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:42.665043  491472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:20:42.681791  491472 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr" : exit status 7
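The assertion at ha_test.go:432 trips on the probe's exit code rather than on the per-node fields captured above: ha-220134-m03 and ha-220134-m04 both report Running/Configured, but "minikube status" exits non-zero whenever any component of the profile (including nodes earlier in the same output) was not in its expected state, and the exact meaning of the value 7 is not confirmed by this log. A minimal sketch of re-running the same probe by hand, assuming the harness workspace layout used above:

	# sketch: re-run the status probe the test uses and inspect its exit code
	out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
	echo "status exit code: $?"   # a non-zero code here is what ha_test.go:432 asserts against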
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220134 -n ha-220134
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220134 logs -n 25: (1.569312569s)
E0812 12:20:44.615718  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m03_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m04 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp testdata/cp-test.txt                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m04_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03:/home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m03 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-220134 node stop m02 -v=7                                                    | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-220134 node start m02 -v=7                                                   | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:11:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:11:33.186100  485208 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:11:33.186217  485208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:33.186226  485208 out.go:304] Setting ErrFile to fd 2...
	I0812 12:11:33.186230  485208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:33.186423  485208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:11:33.187021  485208 out.go:298] Setting JSON to false
	I0812 12:11:33.188089  485208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14024,"bootTime":1723450669,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:11:33.188149  485208 start.go:139] virtualization: kvm guest
	I0812 12:11:33.190527  485208 out.go:177] * [ha-220134] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:11:33.192169  485208 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:11:33.192185  485208 notify.go:220] Checking for updates...
	I0812 12:11:33.195024  485208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:11:33.196400  485208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:11:33.198120  485208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.199635  485208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:11:33.201070  485208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:11:33.202724  485208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:11:33.239881  485208 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 12:11:33.241290  485208 start.go:297] selected driver: kvm2
	I0812 12:11:33.241314  485208 start.go:901] validating driver "kvm2" against <nil>
	I0812 12:11:33.241327  485208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:11:33.242088  485208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:11:33.242171  485208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:11:33.258266  485208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:11:33.258321  485208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 12:11:33.258544  485208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:11:33.258612  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:11:33.258621  485208 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0812 12:11:33.258631  485208 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 12:11:33.258691  485208 start.go:340] cluster config:
	{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:11:33.258822  485208 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:11:33.261782  485208 out.go:177] * Starting "ha-220134" primary control-plane node in "ha-220134" cluster
	I0812 12:11:33.263232  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:11:33.263278  485208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:11:33.263289  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:11:33.263400  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:11:33.263419  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:11:33.263759  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:11:33.263784  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json: {Name:mk32ee8146005faf70784d964d2eaca91fba2ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:11:33.263936  485208 start.go:360] acquireMachinesLock for ha-220134: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:11:33.263965  485208 start.go:364] duration metric: took 15.961µs to acquireMachinesLock for "ha-220134"
	I0812 12:11:33.263982  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:11:33.264051  485208 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 12:11:33.265763  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:11:33.265937  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:33.265990  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:33.280982  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0812 12:11:33.281491  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:33.282123  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:11:33.282145  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:33.282557  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:33.282783  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:33.282962  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:33.283144  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:11:33.283174  485208 client.go:168] LocalClient.Create starting
	I0812 12:11:33.283224  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:11:33.283274  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:11:33.283299  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:11:33.283394  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:11:33.283423  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:11:33.283442  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:11:33.283467  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:11:33.283486  485208 main.go:141] libmachine: (ha-220134) Calling .PreCreateCheck
	I0812 12:11:33.283834  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:33.284239  485208 main.go:141] libmachine: Creating machine...
	I0812 12:11:33.284255  485208 main.go:141] libmachine: (ha-220134) Calling .Create
	I0812 12:11:33.284390  485208 main.go:141] libmachine: (ha-220134) Creating KVM machine...
	I0812 12:11:33.285498  485208 main.go:141] libmachine: (ha-220134) DBG | found existing default KVM network
	I0812 12:11:33.286220  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.286052  485231 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0812 12:11:33.286246  485208 main.go:141] libmachine: (ha-220134) DBG | created network xml: 
	I0812 12:11:33.286262  485208 main.go:141] libmachine: (ha-220134) DBG | <network>
	I0812 12:11:33.286272  485208 main.go:141] libmachine: (ha-220134) DBG |   <name>mk-ha-220134</name>
	I0812 12:11:33.286302  485208 main.go:141] libmachine: (ha-220134) DBG |   <dns enable='no'/>
	I0812 12:11:33.286327  485208 main.go:141] libmachine: (ha-220134) DBG |   
	I0812 12:11:33.286340  485208 main.go:141] libmachine: (ha-220134) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0812 12:11:33.286349  485208 main.go:141] libmachine: (ha-220134) DBG |     <dhcp>
	I0812 12:11:33.286368  485208 main.go:141] libmachine: (ha-220134) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0812 12:11:33.286383  485208 main.go:141] libmachine: (ha-220134) DBG |     </dhcp>
	I0812 12:11:33.286396  485208 main.go:141] libmachine: (ha-220134) DBG |   </ip>
	I0812 12:11:33.286406  485208 main.go:141] libmachine: (ha-220134) DBG |   
	I0812 12:11:33.286440  485208 main.go:141] libmachine: (ha-220134) DBG | </network>
	I0812 12:11:33.286466  485208 main.go:141] libmachine: (ha-220134) DBG | 
	I0812 12:11:33.291860  485208 main.go:141] libmachine: (ha-220134) DBG | trying to create private KVM network mk-ha-220134 192.168.39.0/24...
	I0812 12:11:33.360018  485208 main.go:141] libmachine: (ha-220134) DBG | private KVM network mk-ha-220134 192.168.39.0/24 created
	I0812 12:11:33.360053  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.360000  485231 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.360066  485208 main.go:141] libmachine: (ha-220134) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 ...
	I0812 12:11:33.360082  485208 main.go:141] libmachine: (ha-220134) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:11:33.360215  485208 main.go:141] libmachine: (ha-220134) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:11:33.640396  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.640225  485231 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa...
	I0812 12:11:33.752867  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.752706  485231 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/ha-220134.rawdisk...
	I0812 12:11:33.752897  485208 main.go:141] libmachine: (ha-220134) DBG | Writing magic tar header
	I0812 12:11:33.752910  485208 main.go:141] libmachine: (ha-220134) DBG | Writing SSH key tar header
	I0812 12:11:33.752921  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:33.752830  485231 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 ...
	I0812 12:11:33.752932  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134
	I0812 12:11:33.752942  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:11:33.752952  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:33.752978  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:11:33.752991  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:11:33.752999  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:11:33.753009  485208 main.go:141] libmachine: (ha-220134) DBG | Checking permissions on dir: /home
	I0812 12:11:33.753021  485208 main.go:141] libmachine: (ha-220134) DBG | Skipping /home - not owner
	I0812 12:11:33.753052  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134 (perms=drwx------)
	I0812 12:11:33.753074  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:11:33.753104  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:11:33.753119  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:11:33.753133  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:11:33.753147  485208 main.go:141] libmachine: (ha-220134) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:11:33.753161  485208 main.go:141] libmachine: (ha-220134) Creating domain...
	I0812 12:11:33.754324  485208 main.go:141] libmachine: (ha-220134) define libvirt domain using xml: 
	I0812 12:11:33.754353  485208 main.go:141] libmachine: (ha-220134) <domain type='kvm'>
	I0812 12:11:33.754364  485208 main.go:141] libmachine: (ha-220134)   <name>ha-220134</name>
	I0812 12:11:33.754375  485208 main.go:141] libmachine: (ha-220134)   <memory unit='MiB'>2200</memory>
	I0812 12:11:33.754381  485208 main.go:141] libmachine: (ha-220134)   <vcpu>2</vcpu>
	I0812 12:11:33.754386  485208 main.go:141] libmachine: (ha-220134)   <features>
	I0812 12:11:33.754391  485208 main.go:141] libmachine: (ha-220134)     <acpi/>
	I0812 12:11:33.754396  485208 main.go:141] libmachine: (ha-220134)     <apic/>
	I0812 12:11:33.754401  485208 main.go:141] libmachine: (ha-220134)     <pae/>
	I0812 12:11:33.754416  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754423  485208 main.go:141] libmachine: (ha-220134)   </features>
	I0812 12:11:33.754428  485208 main.go:141] libmachine: (ha-220134)   <cpu mode='host-passthrough'>
	I0812 12:11:33.754434  485208 main.go:141] libmachine: (ha-220134)   
	I0812 12:11:33.754438  485208 main.go:141] libmachine: (ha-220134)   </cpu>
	I0812 12:11:33.754444  485208 main.go:141] libmachine: (ha-220134)   <os>
	I0812 12:11:33.754449  485208 main.go:141] libmachine: (ha-220134)     <type>hvm</type>
	I0812 12:11:33.754488  485208 main.go:141] libmachine: (ha-220134)     <boot dev='cdrom'/>
	I0812 12:11:33.754520  485208 main.go:141] libmachine: (ha-220134)     <boot dev='hd'/>
	I0812 12:11:33.754536  485208 main.go:141] libmachine: (ha-220134)     <bootmenu enable='no'/>
	I0812 12:11:33.754548  485208 main.go:141] libmachine: (ha-220134)   </os>
	I0812 12:11:33.754561  485208 main.go:141] libmachine: (ha-220134)   <devices>
	I0812 12:11:33.754577  485208 main.go:141] libmachine: (ha-220134)     <disk type='file' device='cdrom'>
	I0812 12:11:33.754593  485208 main.go:141] libmachine: (ha-220134)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/boot2docker.iso'/>
	I0812 12:11:33.754609  485208 main.go:141] libmachine: (ha-220134)       <target dev='hdc' bus='scsi'/>
	I0812 12:11:33.754623  485208 main.go:141] libmachine: (ha-220134)       <readonly/>
	I0812 12:11:33.754635  485208 main.go:141] libmachine: (ha-220134)     </disk>
	I0812 12:11:33.754649  485208 main.go:141] libmachine: (ha-220134)     <disk type='file' device='disk'>
	I0812 12:11:33.754663  485208 main.go:141] libmachine: (ha-220134)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:11:33.754677  485208 main.go:141] libmachine: (ha-220134)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/ha-220134.rawdisk'/>
	I0812 12:11:33.754701  485208 main.go:141] libmachine: (ha-220134)       <target dev='hda' bus='virtio'/>
	I0812 12:11:33.754714  485208 main.go:141] libmachine: (ha-220134)     </disk>
	I0812 12:11:33.754727  485208 main.go:141] libmachine: (ha-220134)     <interface type='network'>
	I0812 12:11:33.754741  485208 main.go:141] libmachine: (ha-220134)       <source network='mk-ha-220134'/>
	I0812 12:11:33.754753  485208 main.go:141] libmachine: (ha-220134)       <model type='virtio'/>
	I0812 12:11:33.754765  485208 main.go:141] libmachine: (ha-220134)     </interface>
	I0812 12:11:33.754778  485208 main.go:141] libmachine: (ha-220134)     <interface type='network'>
	I0812 12:11:33.754794  485208 main.go:141] libmachine: (ha-220134)       <source network='default'/>
	I0812 12:11:33.754807  485208 main.go:141] libmachine: (ha-220134)       <model type='virtio'/>
	I0812 12:11:33.754818  485208 main.go:141] libmachine: (ha-220134)     </interface>
	I0812 12:11:33.754830  485208 main.go:141] libmachine: (ha-220134)     <serial type='pty'>
	I0812 12:11:33.754862  485208 main.go:141] libmachine: (ha-220134)       <target port='0'/>
	I0812 12:11:33.754878  485208 main.go:141] libmachine: (ha-220134)     </serial>
	I0812 12:11:33.754893  485208 main.go:141] libmachine: (ha-220134)     <console type='pty'>
	I0812 12:11:33.754906  485208 main.go:141] libmachine: (ha-220134)       <target type='serial' port='0'/>
	I0812 12:11:33.754923  485208 main.go:141] libmachine: (ha-220134)     </console>
	I0812 12:11:33.754936  485208 main.go:141] libmachine: (ha-220134)     <rng model='virtio'>
	I0812 12:11:33.754948  485208 main.go:141] libmachine: (ha-220134)       <backend model='random'>/dev/random</backend>
	I0812 12:11:33.754961  485208 main.go:141] libmachine: (ha-220134)     </rng>
	I0812 12:11:33.754970  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754985  485208 main.go:141] libmachine: (ha-220134)     
	I0812 12:11:33.754997  485208 main.go:141] libmachine: (ha-220134)   </devices>
	I0812 12:11:33.755002  485208 main.go:141] libmachine: (ha-220134) </domain>
	I0812 12:11:33.755012  485208 main.go:141] libmachine: (ha-220134) 
	I0812 12:11:33.759352  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:03:67:f1 in network default
	I0812 12:11:33.760110  485208 main.go:141] libmachine: (ha-220134) Ensuring networks are active...
	I0812 12:11:33.760131  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:33.760878  485208 main.go:141] libmachine: (ha-220134) Ensuring network default is active
	I0812 12:11:33.761266  485208 main.go:141] libmachine: (ha-220134) Ensuring network mk-ha-220134 is active
	I0812 12:11:33.761880  485208 main.go:141] libmachine: (ha-220134) Getting domain xml...
	I0812 12:11:33.762678  485208 main.go:141] libmachine: (ha-220134) Creating domain...
	I0812 12:11:34.975900  485208 main.go:141] libmachine: (ha-220134) Waiting to get IP...
	I0812 12:11:34.976768  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:34.977206  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:34.977230  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:34.977181  485231 retry.go:31] will retry after 288.895038ms: waiting for machine to come up
	I0812 12:11:35.267763  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:35.268298  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:35.268326  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:35.268241  485231 retry.go:31] will retry after 387.612987ms: waiting for machine to come up
	I0812 12:11:35.657979  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:35.658474  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:35.658501  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:35.658431  485231 retry.go:31] will retry after 449.177651ms: waiting for machine to come up
	I0812 12:11:36.109210  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:36.109686  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:36.109711  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:36.109613  485231 retry.go:31] will retry after 395.683299ms: waiting for machine to come up
	I0812 12:11:36.507341  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:36.507826  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:36.507856  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:36.507771  485231 retry.go:31] will retry after 725.500863ms: waiting for machine to come up
	I0812 12:11:37.235267  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:37.235665  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:37.235694  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:37.235627  485231 retry.go:31] will retry after 798.697333ms: waiting for machine to come up
	I0812 12:11:38.035576  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:38.036019  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:38.036062  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:38.035946  485231 retry.go:31] will retry after 872.844105ms: waiting for machine to come up
	I0812 12:11:38.910826  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:38.911218  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:38.911249  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:38.911175  485231 retry.go:31] will retry after 985.561572ms: waiting for machine to come up
	I0812 12:11:39.899617  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:39.900083  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:39.900108  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:39.900021  485231 retry.go:31] will retry after 1.598872532s: waiting for machine to come up
	I0812 12:11:41.500937  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:41.501445  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:41.501476  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:41.501385  485231 retry.go:31] will retry after 2.324192549s: waiting for machine to come up
	I0812 12:11:43.826795  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:43.827291  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:43.827323  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:43.827230  485231 retry.go:31] will retry after 2.849217598s: waiting for machine to come up
	I0812 12:11:46.680256  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:46.680620  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:46.680645  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:46.680593  485231 retry.go:31] will retry after 3.064622363s: waiting for machine to come up
	I0812 12:11:49.747477  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:49.747946  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find current IP address of domain ha-220134 in network mk-ha-220134
	I0812 12:11:49.747971  485208 main.go:141] libmachine: (ha-220134) DBG | I0812 12:11:49.747895  485231 retry.go:31] will retry after 3.790371548s: waiting for machine to come up
	I0812 12:11:53.539642  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.539997  485208 main.go:141] libmachine: (ha-220134) Found IP for machine: 192.168.39.228
	I0812 12:11:53.540031  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has current primary IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.540040  485208 main.go:141] libmachine: (ha-220134) Reserving static IP address...
	I0812 12:11:53.540360  485208 main.go:141] libmachine: (ha-220134) DBG | unable to find host DHCP lease matching {name: "ha-220134", mac: "52:54:00:91:2e:31", ip: "192.168.39.228"} in network mk-ha-220134
	I0812 12:11:53.617206  485208 main.go:141] libmachine: (ha-220134) DBG | Getting to WaitForSSH function...
	I0812 12:11:53.617243  485208 main.go:141] libmachine: (ha-220134) Reserved static IP address: 192.168.39.228
	I0812 12:11:53.617258  485208 main.go:141] libmachine: (ha-220134) Waiting for SSH to be available...
	I0812 12:11:53.619839  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.620303  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.620336  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.620396  485208 main.go:141] libmachine: (ha-220134) DBG | Using SSH client type: external
	I0812 12:11:53.620419  485208 main.go:141] libmachine: (ha-220134) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa (-rw-------)
	I0812 12:11:53.620445  485208 main.go:141] libmachine: (ha-220134) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:11:53.620463  485208 main.go:141] libmachine: (ha-220134) DBG | About to run SSH command:
	I0812 12:11:53.620474  485208 main.go:141] libmachine: (ha-220134) DBG | exit 0
	I0812 12:11:53.741422  485208 main.go:141] libmachine: (ha-220134) DBG | SSH cmd err, output: <nil>: 
	I0812 12:11:53.741716  485208 main.go:141] libmachine: (ha-220134) KVM machine creation complete!
	I0812 12:11:53.742080  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:53.742714  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:53.742909  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:53.743101  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:11:53.743118  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:11:53.744621  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:11:53.744636  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:11:53.744641  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:11:53.744647  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.746912  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.747241  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.747267  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.747414  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.747607  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.747745  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.747869  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.748222  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.748450  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.748468  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:11:53.848653  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:11:53.848674  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:11:53.848682  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.851655  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.852060  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.852091  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.852272  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.852505  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.852677  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.852860  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.853067  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.853298  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.853312  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:11:53.954357  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:11:53.954469  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:11:53.954480  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:11:53.954489  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:53.954863  485208 buildroot.go:166] provisioning hostname "ha-220134"
	I0812 12:11:53.954897  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:53.955102  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:53.957563  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.957924  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:53.957956  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:53.958082  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:53.958292  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.958468  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:53.958612  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:53.958777  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:53.958968  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:53.958982  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134 && echo "ha-220134" | sudo tee /etc/hostname
	I0812 12:11:54.072834  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:11:54.072867  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.076065  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.076467  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.076503  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.076665  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.076919  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.077072  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.077278  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.077483  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.077714  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.077741  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:11:54.186128  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:11:54.186164  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:11:54.186228  485208 buildroot.go:174] setting up certificates
	I0812 12:11:54.186239  485208 provision.go:84] configureAuth start
	I0812 12:11:54.186252  485208 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:11:54.186574  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:54.189163  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.189491  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.189533  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.189599  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.191953  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.192339  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.192365  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.192507  485208 provision.go:143] copyHostCerts
	I0812 12:11:54.192544  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:11:54.192612  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:11:54.192623  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:11:54.192717  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:11:54.192870  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:11:54.192904  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:11:54.192915  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:11:54.192957  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:11:54.193021  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:11:54.193046  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:11:54.193055  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:11:54.193100  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:11:54.193166  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134 san=[127.0.0.1 192.168.39.228 ha-220134 localhost minikube]
	I0812 12:11:54.372749  485208 provision.go:177] copyRemoteCerts
	I0812 12:11:54.372827  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:11:54.372857  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.375849  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.376400  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.376425  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.376748  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.377033  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.377293  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.377482  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:54.460265  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:11:54.460342  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:11:54.485745  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:11:54.485834  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0812 12:11:54.510499  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:11:54.510602  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 12:11:54.535012  485208 provision.go:87] duration metric: took 348.757151ms to configureAuth
	I0812 12:11:54.535041  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:11:54.535266  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:11:54.535398  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.538016  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.538399  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.538426  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.538633  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.538838  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.539025  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.539154  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.539302  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.539611  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.539636  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:11:54.817462  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:11:54.817498  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:11:54.817523  485208 main.go:141] libmachine: (ha-220134) Calling .GetURL
	I0812 12:11:54.819130  485208 main.go:141] libmachine: (ha-220134) DBG | Using libvirt version 6000000
	I0812 12:11:54.821645  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.821997  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.822034  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.822192  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:11:54.822212  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:11:54.822222  485208 client.go:171] duration metric: took 21.539035903s to LocalClient.Create
	I0812 12:11:54.822258  485208 start.go:167] duration metric: took 21.539114148s to libmachine.API.Create "ha-220134"
	I0812 12:11:54.822272  485208 start.go:293] postStartSetup for "ha-220134" (driver="kvm2")
	I0812 12:11:54.822287  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:11:54.822312  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:54.822652  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:11:54.822679  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.825308  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.825675  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.825703  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.825845  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.826086  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.826291  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.826425  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:54.908273  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:11:54.912764  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:11:54.912801  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:11:54.912880  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:11:54.913006  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:11:54.913021  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:11:54.913207  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:11:54.922687  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:11:54.947648  485208 start.go:296] duration metric: took 125.360245ms for postStartSetup
	I0812 12:11:54.947706  485208 main.go:141] libmachine: (ha-220134) Calling .GetConfigRaw
	I0812 12:11:54.948799  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:54.952002  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.952329  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.952361  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.952580  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:11:54.952827  485208 start.go:128] duration metric: took 21.688764926s to createHost
	I0812 12:11:54.952857  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:54.954861  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.955171  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:54.955192  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:54.955351  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:54.955545  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.955722  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:54.955864  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:54.956022  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:11:54.956186  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:11:54.956197  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:11:55.054036  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464715.025192423
	
	I0812 12:11:55.054074  485208 fix.go:216] guest clock: 1723464715.025192423
	I0812 12:11:55.054083  485208 fix.go:229] Guest: 2024-08-12 12:11:55.025192423 +0000 UTC Remote: 2024-08-12 12:11:54.952841314 +0000 UTC m=+21.803416181 (delta=72.351109ms)
	I0812 12:11:55.054107  485208 fix.go:200] guest clock delta is within tolerance: 72.351109ms
	I0812 12:11:55.054112  485208 start.go:83] releasing machines lock for "ha-220134", held for 21.790139043s
	I0812 12:11:55.054136  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.054485  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:55.057190  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.057503  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.057531  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.057677  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058144  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058320  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:11:55.058422  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:11:55.058478  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:55.058583  485208 ssh_runner.go:195] Run: cat /version.json
	I0812 12:11:55.058607  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:11:55.061184  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061361  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061577  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.061610  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061762  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:55.061764  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:55.061790  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:55.061970  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:55.062042  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:11:55.062125  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:55.062243  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:11:55.062258  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:55.062378  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:11:55.062533  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:11:55.155726  485208 ssh_runner.go:195] Run: systemctl --version
	I0812 12:11:55.161772  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:11:55.322700  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:11:55.328524  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:11:55.328599  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:11:55.344607  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:11:55.344642  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:11:55.344710  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:11:55.361606  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:11:55.375767  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:11:55.375839  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:11:55.390879  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:11:55.405785  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:11:55.524336  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:11:55.686262  485208 docker.go:233] disabling docker service ...
	I0812 12:11:55.686364  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:11:55.700694  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:11:55.714050  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:11:55.838343  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:11:55.960857  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:11:55.974783  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:11:55.993794  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:11:55.993871  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.004591  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:11:56.004677  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.015367  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.026246  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.036926  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:11:56.047567  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.058000  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.076139  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:11:56.086872  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:11:56.096377  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:11:56.096467  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:11:56.109476  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:11:56.119668  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:11:56.246639  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:11:56.404629  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:11:56.404713  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:11:56.409594  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:11:56.409656  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:11:56.413572  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:11:56.450863  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:11:56.450977  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:11:56.480838  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:11:56.512289  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:11:56.513499  485208 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:11:56.516052  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:56.516417  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:11:56.516438  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:11:56.516720  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:11:56.521033  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:11:56.534125  485208 kubeadm.go:883] updating cluster {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:11:56.534243  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:11:56.534290  485208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:11:56.565035  485208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0812 12:11:56.565136  485208 ssh_runner.go:195] Run: which lz4
	I0812 12:11:56.569041  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0812 12:11:56.569157  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0812 12:11:56.573362  485208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 12:11:56.573390  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0812 12:11:58.007854  485208 crio.go:462] duration metric: took 1.438727808s to copy over tarball
	I0812 12:11:58.007937  485208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 12:12:00.192513  485208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.184538664s)
	I0812 12:12:00.192549  485208 crio.go:469] duration metric: took 2.184663391s to extract the tarball
	I0812 12:12:00.192558  485208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0812 12:12:00.231017  485208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:12:00.281405  485208 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:12:00.281437  485208 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:12:00.281447  485208 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.30.3 crio true true} ...
	I0812 12:12:00.281589  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:12:00.281686  485208 ssh_runner.go:195] Run: crio config
	I0812 12:12:00.329283  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:12:00.329306  485208 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 12:12:00.329316  485208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:12:00.329340  485208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220134 NodeName:ha-220134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:12:00.329487  485208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220134"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:12:00.329510  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:12:00.329557  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:12:00.346734  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:12:00.346882  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0812 12:12:00.346958  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:00.357489  485208 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:12:00.357565  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 12:12:00.367309  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0812 12:12:00.383963  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:12:00.400920  485208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0812 12:12:00.417671  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0812 12:12:00.434262  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:12:00.438431  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:12:00.450706  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:12:00.579801  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:12:00.597577  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.228
	I0812 12:12:00.597603  485208 certs.go:194] generating shared ca certs ...
	I0812 12:12:00.597620  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.597789  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:12:00.597850  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:12:00.597861  485208 certs.go:256] generating profile certs ...
	I0812 12:12:00.597916  485208 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:12:00.597942  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt with IP's: []
	I0812 12:12:00.677939  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt ...
	I0812 12:12:00.677974  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt: {Name:mk9fafa446d8b28b9f7b65115def1ce5a05d4c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.678176  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key ...
	I0812 12:12:00.678194  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key: {Name:mk4353a7608a6c005e7bf75fcd414510302dc630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.678310  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3
	I0812 12:12:00.678338  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.254]
	I0812 12:12:00.762928  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 ...
	I0812 12:12:00.762959  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3: {Name:mkd955c01dada19619c74559758a76b9fc4239c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.763137  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3 ...
	I0812 12:12:00.763150  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3: {Name:mk45e5f4c537690b3c1c8e44623614717bdeb3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.763214  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.b2ef11b3 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:12:00.763282  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.b2ef11b3 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:12:00.763354  485208 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:12:00.763368  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt with IP's: []
	I0812 12:12:00.899121  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt ...
	I0812 12:12:00.899154  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt: {Name:mkeb87ac702b51eb8807073957337d78c2486afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.899327  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key ...
	I0812 12:12:00.899338  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key: {Name:mkc7e9a0b81dcf49c56951bce088c2c205615598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:00.899415  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:12:00.899431  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:12:00.899442  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:12:00.899455  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:12:00.899467  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:12:00.899479  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:12:00.899501  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:12:00.899513  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:12:00.899563  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:12:00.899605  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:12:00.899613  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:12:00.899632  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:12:00.899654  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:12:00.899676  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:12:00.899713  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:00.899742  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:12:00.899755  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:12:00.899768  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:00.900328  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:12:00.926164  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:12:00.949902  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:12:00.973957  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:12:00.999751  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 12:12:01.024997  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:12:01.052808  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:12:01.079537  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:12:01.103702  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:12:01.132532  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:12:01.158646  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:12:01.187950  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:12:01.208248  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:12:01.214687  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:12:01.226962  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.232011  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.232079  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:12:01.238440  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:12:01.249814  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:12:01.261311  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.266352  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.266405  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:12:01.272358  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:12:01.284003  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:12:01.295843  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.300572  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.300635  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:01.306539  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:12:01.318250  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:12:01.322951  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:12:01.323010  485208 kubeadm.go:392] StartCluster: {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:12:01.323088  485208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:12:01.323140  485208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:12:01.361708  485208 cri.go:89] found id: ""
	I0812 12:12:01.361800  485208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 12:12:01.374462  485208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 12:12:01.392559  485208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 12:12:01.404437  485208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 12:12:01.404455  485208 kubeadm.go:157] found existing configuration files:
	
	I0812 12:12:01.404506  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 12:12:01.415830  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 12:12:01.415917  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 12:12:01.427544  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 12:12:01.441613  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 12:12:01.441675  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 12:12:01.454912  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 12:12:01.465686  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 12:12:01.465765  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 12:12:01.475115  485208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 12:12:01.483837  485208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 12:12:01.483908  485208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
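The four grep/rm pairs above are minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it checks whether the expected control-plane endpoint is present and removes the file if it is not (here the files simply do not exist yet, so every check exits with status 2). A rough, self-contained sketch of that logic, assuming local files instead of commands run over SSH:

// cleanupStaleKubeconfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
// Sketch only: the local-filesystem assumption is illustrative.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove so kubeadm regenerates it.
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}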
	I0812 12:12:01.493066  485208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 12:12:01.600440  485208 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0812 12:12:01.600526  485208 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 12:12:01.720488  485208 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 12:12:01.720617  485208 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 12:12:01.720757  485208 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 12:12:01.964723  485208 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 12:12:02.196686  485208 out.go:204]   - Generating certificates and keys ...
	I0812 12:12:02.196824  485208 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 12:12:02.196906  485208 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 12:12:02.197568  485208 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 12:12:02.555832  485208 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 12:12:02.706304  485208 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 12:12:02.767137  485208 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 12:12:03.088184  485208 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 12:12:03.088345  485208 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-220134 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0812 12:12:03.167870  485208 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 12:12:03.168076  485208 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-220134 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0812 12:12:03.343957  485208 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 12:12:03.527996  485208 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 12:12:03.668796  485208 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 12:12:03.668976  485208 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 12:12:04.004200  485208 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 12:12:04.200658  485208 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0812 12:12:04.651462  485208 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 12:12:04.776476  485208 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 12:12:04.967615  485208 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 12:12:04.968073  485208 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 12:12:04.971286  485208 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 12:12:04.974689  485208 out.go:204]   - Booting up control plane ...
	I0812 12:12:04.974798  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 12:12:04.974867  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 12:12:04.974981  485208 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 12:12:04.991918  485208 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 12:12:04.992859  485208 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 12:12:04.992934  485208 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 12:12:05.133194  485208 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0812 12:12:05.133322  485208 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0812 12:12:05.635328  485208 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.236507ms
	I0812 12:12:05.635432  485208 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0812 12:12:11.543680  485208 kubeadm.go:310] [api-check] The API server is healthy after 5.912552105s
	I0812 12:12:11.566590  485208 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0812 12:12:11.587583  485208 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0812 12:12:11.616176  485208 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0812 12:12:11.616448  485208 kubeadm.go:310] [mark-control-plane] Marking the node ha-220134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0812 12:12:11.633573  485208 kubeadm.go:310] [bootstrap-token] Using token: ibuffq.8zx5f52ylb7rvh5p
	I0812 12:12:11.635071  485208 out.go:204]   - Configuring RBAC rules ...
	I0812 12:12:11.635253  485208 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0812 12:12:11.642314  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0812 12:12:11.653391  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0812 12:12:11.662006  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0812 12:12:11.668661  485208 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0812 12:12:11.674408  485208 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0812 12:12:11.956406  485208 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0812 12:12:12.397495  485208 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0812 12:12:12.957947  485208 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0812 12:12:12.957977  485208 kubeadm.go:310] 
	I0812 12:12:12.958055  485208 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0812 12:12:12.958065  485208 kubeadm.go:310] 
	I0812 12:12:12.958194  485208 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0812 12:12:12.958224  485208 kubeadm.go:310] 
	I0812 12:12:12.958279  485208 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0812 12:12:12.958358  485208 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0812 12:12:12.958421  485208 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0812 12:12:12.958434  485208 kubeadm.go:310] 
	I0812 12:12:12.958502  485208 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0812 12:12:12.958521  485208 kubeadm.go:310] 
	I0812 12:12:12.958597  485208 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0812 12:12:12.958607  485208 kubeadm.go:310] 
	I0812 12:12:12.958680  485208 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0812 12:12:12.958783  485208 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0812 12:12:12.958871  485208 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0812 12:12:12.958883  485208 kubeadm.go:310] 
	I0812 12:12:12.958993  485208 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0812 12:12:12.959118  485208 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0812 12:12:12.959129  485208 kubeadm.go:310] 
	I0812 12:12:12.959250  485208 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ibuffq.8zx5f52ylb7rvh5p \
	I0812 12:12:12.959394  485208 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f \
	I0812 12:12:12.959425  485208 kubeadm.go:310] 	--control-plane 
	I0812 12:12:12.959432  485208 kubeadm.go:310] 
	I0812 12:12:12.959545  485208 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0812 12:12:12.959558  485208 kubeadm.go:310] 
	I0812 12:12:12.959643  485208 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ibuffq.8zx5f52ylb7rvh5p \
	I0812 12:12:12.959791  485208 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f 
	I0812 12:12:12.959939  485208 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
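The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's Subject Public Key Info, which joining nodes use to pin the CA during TLS bootstrap. A small stdlib-only sketch of deriving a hash of that shape from the CA certificate (the path is taken from the log; local file access is an assumption):

// caCertHash derives the sha256:<hex> pin of a CA certificate's public key,
// the same shape of value kubeadm prints for --discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(h)
}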
	I0812 12:12:12.959963  485208 cni.go:84] Creating CNI manager for ""
	I0812 12:12:12.959972  485208 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0812 12:12:12.962013  485208 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 12:12:12.963712  485208 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0812 12:12:12.969430  485208 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0812 12:12:12.969454  485208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0812 12:12:12.989869  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
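Because a multi-node cluster is requested, minikube picks the kindnet manifest, copies it to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl, as the two lines above show. A minimal sketch of that apply step, assuming the kubectl and kubeconfig paths from the log are reachable locally rather than over SSH:

// applyCNIManifest writes a CNI manifest to disk and applies it with kubectl,
// mirroring the scp + "kubectl apply -f" pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyCNIManifest(manifest []byte) error {
	const target = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(target, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", target,
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // e.g. a kindnet manifest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := applyCNIManifest(manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}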
	I0812 12:12:13.422197  485208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 12:12:13.422338  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:13.422380  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134 minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=true
	I0812 12:12:13.453066  485208 ops.go:34] apiserver oom_adj: -16
	I0812 12:12:13.607152  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:14.108069  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:14.607602  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:15.107870  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:15.607655  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:16.107346  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:16.607784  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:17.107555  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:17.607365  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:18.107700  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:18.607848  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:19.107912  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:19.607873  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:20.107209  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:20.608203  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:21.108076  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:21.607706  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:22.107644  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:22.607642  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:23.107871  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:23.608104  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:24.107517  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:24.607231  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:25.108221  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 12:12:25.204930  485208 kubeadm.go:1113] duration metric: took 11.782675487s to wait for elevateKubeSystemPrivileges
	I0812 12:12:25.204974  485208 kubeadm.go:394] duration metric: took 23.881968454s to StartCluster
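The burst of identical "kubectl get sa default" commands above is minikube polling roughly every 500 ms until the default service account exists, which is what the elevateKubeSystemPrivileges wait (11.78 s here) is measuring. A compact sketch of the same poll loop, using the kubectl path from the log and a hypothetical two-minute timeout:

// waitForDefaultServiceAccount polls "kubectl get sa default" until it
// succeeds or the timeout expires, mirroring the ~500 ms polling in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute, // illustrative timeout, not taken from the log
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}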
	I0812 12:12:25.204998  485208 settings.go:142] acquiring lock: {Name:mke9ed38a916e17fe99baccde568c442d70df1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:25.205115  485208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:12:25.205837  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:25.206097  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 12:12:25.206109  485208 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:12:25.206142  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:12:25.206156  485208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 12:12:25.206246  485208 addons.go:69] Setting storage-provisioner=true in profile "ha-220134"
	I0812 12:12:25.206295  485208 addons.go:234] Setting addon storage-provisioner=true in "ha-220134"
	I0812 12:12:25.206305  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:25.206330  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:25.206259  485208 addons.go:69] Setting default-storageclass=true in profile "ha-220134"
	I0812 12:12:25.206383  485208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-220134"
	I0812 12:12:25.206702  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.206753  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.206817  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.206853  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.222325  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
	I0812 12:12:25.222335  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0812 12:12:25.222893  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.222900  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.223410  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.223410  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.223437  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.223448  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.223872  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.223876  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.224071  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.224404  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.224448  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.226840  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:12:25.227237  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 12:12:25.227882  485208 cert_rotation.go:137] Starting client certificate rotation controller
	I0812 12:12:25.228157  485208 addons.go:234] Setting addon default-storageclass=true in "ha-220134"
	I0812 12:12:25.228203  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:25.228595  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.228648  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.240607  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I0812 12:12:25.241157  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.241678  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.241711  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.242030  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.242249  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.243899  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0812 12:12:25.244190  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:25.244273  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.244859  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.244889  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.245249  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.245860  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:25.245898  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:25.246480  485208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 12:12:25.247760  485208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:12:25.247789  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 12:12:25.247810  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:25.250941  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.251481  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:25.251523  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.251755  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:25.252095  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:25.252300  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:25.252451  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:25.262005  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36859
	I0812 12:12:25.262416  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:25.262950  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:25.262979  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:25.263366  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:25.263628  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:25.265034  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:25.265278  485208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 12:12:25.265294  485208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 12:12:25.265311  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:25.268020  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.268411  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:25.268435  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:25.268586  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:25.268765  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:25.268914  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:25.269112  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:25.317753  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 12:12:25.408150  485208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 12:12:25.438925  485208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 12:12:25.822035  485208 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
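The long shell pipeline a few lines above rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.39.1) just before the "forward . /etc/resolv.conf" line, enables log, and replaces the ConfigMap, which is what the "host record injected" message confirms. A small sketch of the text transformation itself (the kubectl get/replace plumbing is omitted; the sample Corefile is illustrative):

// injectHostRecord inserts a hosts{} stanza for host.minikube.internal into a
// CoreDNS Corefile, before the forward-to-resolv.conf line, as the sed
// pipeline in the log does.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock) // insert the hosts block just above forward
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}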
	I0812 12:12:25.846330  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.846362  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.846702  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.846731  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:25.846745  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.846754  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.847037  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:25.847080  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.847099  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:25.847240  485208 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0812 12:12:25.847254  485208 round_trippers.go:469] Request Headers:
	I0812 12:12:25.847266  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:12:25.847271  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:12:25.854571  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:12:25.855182  485208 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0812 12:12:25.855198  485208 round_trippers.go:469] Request Headers:
	I0812 12:12:25.855207  485208 round_trippers.go:473]     Content-Type: application/json
	I0812 12:12:25.855212  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:12:25.855221  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:12:25.857512  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:12:25.857680  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:25.857693  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:25.857975  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:25.858025  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:25.858034  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.097820  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:26.097856  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:26.098271  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:26.098324  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:26.098332  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.098346  485208 main.go:141] libmachine: Making call to close driver server
	I0812 12:12:26.098354  485208 main.go:141] libmachine: (ha-220134) Calling .Close
	I0812 12:12:26.098632  485208 main.go:141] libmachine: Successfully made call to close driver server
	I0812 12:12:26.098654  485208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 12:12:26.098642  485208 main.go:141] libmachine: (ha-220134) DBG | Closing plugin on server side
	I0812 12:12:26.100480  485208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0812 12:12:26.101781  485208 addons.go:510] duration metric: took 895.613385ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0812 12:12:26.101836  485208 start.go:246] waiting for cluster config update ...
	I0812 12:12:26.101852  485208 start.go:255] writing updated cluster config ...
	I0812 12:12:26.103379  485208 out.go:177] 
	I0812 12:12:26.104712  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:26.104819  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:26.107048  485208 out.go:177] * Starting "ha-220134-m02" control-plane node in "ha-220134" cluster
	I0812 12:12:26.108313  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:12:26.108350  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:12:26.108464  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:12:26.108480  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:12:26.108557  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:26.108732  485208 start.go:360] acquireMachinesLock for ha-220134-m02: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:12:26.108796  485208 start.go:364] duration metric: took 43.274µs to acquireMachinesLock for "ha-220134-m02"
	I0812 12:12:26.108821  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:12:26.108927  485208 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0812 12:12:26.110341  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:12:26.110441  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:26.110469  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:26.126544  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0812 12:12:26.127053  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:26.127557  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:26.127581  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:26.127911  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:26.128171  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:26.128340  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:26.128586  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:12:26.128616  485208 client.go:168] LocalClient.Create starting
	I0812 12:12:26.128650  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:12:26.128691  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:12:26.128711  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:12:26.128778  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:12:26.128799  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:12:26.128811  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:12:26.128825  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:12:26.128833  485208 main.go:141] libmachine: (ha-220134-m02) Calling .PreCreateCheck
	I0812 12:12:26.129005  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:26.129451  485208 main.go:141] libmachine: Creating machine...
	I0812 12:12:26.129465  485208 main.go:141] libmachine: (ha-220134-m02) Calling .Create
	I0812 12:12:26.129610  485208 main.go:141] libmachine: (ha-220134-m02) Creating KVM machine...
	I0812 12:12:26.130849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found existing default KVM network
	I0812 12:12:26.130996  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found existing private KVM network mk-ha-220134
	I0812 12:12:26.131205  485208 main.go:141] libmachine: (ha-220134-m02) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 ...
	I0812 12:12:26.131238  485208 main.go:141] libmachine: (ha-220134-m02) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:12:26.131302  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.131184  485583 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:12:26.131375  485208 main.go:141] libmachine: (ha-220134-m02) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:12:26.432155  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.431990  485583 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa...
	I0812 12:12:26.836485  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.836306  485583 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/ha-220134-m02.rawdisk...
	I0812 12:12:26.836530  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Writing magic tar header
	I0812 12:12:26.836575  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 (perms=drwx------)
	I0812 12:12:26.836621  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Writing SSH key tar header
	I0812 12:12:26.836635  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:12:26.836650  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:12:26.836659  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:12:26.836670  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:12:26.836686  485208 main.go:141] libmachine: (ha-220134-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:12:26.836699  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:26.836420  485583 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02 ...
	I0812 12:12:26.836713  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02
	I0812 12:12:26.836722  485208 main.go:141] libmachine: (ha-220134-m02) Creating domain...
	I0812 12:12:26.836744  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:12:26.836760  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:12:26.836774  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:12:26.836785  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:12:26.836796  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:12:26.836808  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Checking permissions on dir: /home
	I0812 12:12:26.836822  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Skipping /home - not owner
	I0812 12:12:26.837790  485208 main.go:141] libmachine: (ha-220134-m02) define libvirt domain using xml: 
	I0812 12:12:26.837818  485208 main.go:141] libmachine: (ha-220134-m02) <domain type='kvm'>
	I0812 12:12:26.837828  485208 main.go:141] libmachine: (ha-220134-m02)   <name>ha-220134-m02</name>
	I0812 12:12:26.837837  485208 main.go:141] libmachine: (ha-220134-m02)   <memory unit='MiB'>2200</memory>
	I0812 12:12:26.837845  485208 main.go:141] libmachine: (ha-220134-m02)   <vcpu>2</vcpu>
	I0812 12:12:26.837855  485208 main.go:141] libmachine: (ha-220134-m02)   <features>
	I0812 12:12:26.837864  485208 main.go:141] libmachine: (ha-220134-m02)     <acpi/>
	I0812 12:12:26.837873  485208 main.go:141] libmachine: (ha-220134-m02)     <apic/>
	I0812 12:12:26.837881  485208 main.go:141] libmachine: (ha-220134-m02)     <pae/>
	I0812 12:12:26.837890  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.837901  485208 main.go:141] libmachine: (ha-220134-m02)   </features>
	I0812 12:12:26.837911  485208 main.go:141] libmachine: (ha-220134-m02)   <cpu mode='host-passthrough'>
	I0812 12:12:26.837921  485208 main.go:141] libmachine: (ha-220134-m02)   
	I0812 12:12:26.837934  485208 main.go:141] libmachine: (ha-220134-m02)   </cpu>
	I0812 12:12:26.837945  485208 main.go:141] libmachine: (ha-220134-m02)   <os>
	I0812 12:12:26.837955  485208 main.go:141] libmachine: (ha-220134-m02)     <type>hvm</type>
	I0812 12:12:26.837962  485208 main.go:141] libmachine: (ha-220134-m02)     <boot dev='cdrom'/>
	I0812 12:12:26.837972  485208 main.go:141] libmachine: (ha-220134-m02)     <boot dev='hd'/>
	I0812 12:12:26.837986  485208 main.go:141] libmachine: (ha-220134-m02)     <bootmenu enable='no'/>
	I0812 12:12:26.837996  485208 main.go:141] libmachine: (ha-220134-m02)   </os>
	I0812 12:12:26.838034  485208 main.go:141] libmachine: (ha-220134-m02)   <devices>
	I0812 12:12:26.838065  485208 main.go:141] libmachine: (ha-220134-m02)     <disk type='file' device='cdrom'>
	I0812 12:12:26.838086  485208 main.go:141] libmachine: (ha-220134-m02)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/boot2docker.iso'/>
	I0812 12:12:26.838097  485208 main.go:141] libmachine: (ha-220134-m02)       <target dev='hdc' bus='scsi'/>
	I0812 12:12:26.838110  485208 main.go:141] libmachine: (ha-220134-m02)       <readonly/>
	I0812 12:12:26.838120  485208 main.go:141] libmachine: (ha-220134-m02)     </disk>
	I0812 12:12:26.838130  485208 main.go:141] libmachine: (ha-220134-m02)     <disk type='file' device='disk'>
	I0812 12:12:26.838148  485208 main.go:141] libmachine: (ha-220134-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:12:26.838164  485208 main.go:141] libmachine: (ha-220134-m02)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/ha-220134-m02.rawdisk'/>
	I0812 12:12:26.838175  485208 main.go:141] libmachine: (ha-220134-m02)       <target dev='hda' bus='virtio'/>
	I0812 12:12:26.838185  485208 main.go:141] libmachine: (ha-220134-m02)     </disk>
	I0812 12:12:26.838208  485208 main.go:141] libmachine: (ha-220134-m02)     <interface type='network'>
	I0812 12:12:26.838230  485208 main.go:141] libmachine: (ha-220134-m02)       <source network='mk-ha-220134'/>
	I0812 12:12:26.838251  485208 main.go:141] libmachine: (ha-220134-m02)       <model type='virtio'/>
	I0812 12:12:26.838264  485208 main.go:141] libmachine: (ha-220134-m02)     </interface>
	I0812 12:12:26.838274  485208 main.go:141] libmachine: (ha-220134-m02)     <interface type='network'>
	I0812 12:12:26.838286  485208 main.go:141] libmachine: (ha-220134-m02)       <source network='default'/>
	I0812 12:12:26.838297  485208 main.go:141] libmachine: (ha-220134-m02)       <model type='virtio'/>
	I0812 12:12:26.838306  485208 main.go:141] libmachine: (ha-220134-m02)     </interface>
	I0812 12:12:26.838313  485208 main.go:141] libmachine: (ha-220134-m02)     <serial type='pty'>
	I0812 12:12:26.838353  485208 main.go:141] libmachine: (ha-220134-m02)       <target port='0'/>
	I0812 12:12:26.838377  485208 main.go:141] libmachine: (ha-220134-m02)     </serial>
	I0812 12:12:26.838388  485208 main.go:141] libmachine: (ha-220134-m02)     <console type='pty'>
	I0812 12:12:26.838397  485208 main.go:141] libmachine: (ha-220134-m02)       <target type='serial' port='0'/>
	I0812 12:12:26.838409  485208 main.go:141] libmachine: (ha-220134-m02)     </console>
	I0812 12:12:26.838416  485208 main.go:141] libmachine: (ha-220134-m02)     <rng model='virtio'>
	I0812 12:12:26.838429  485208 main.go:141] libmachine: (ha-220134-m02)       <backend model='random'>/dev/random</backend>
	I0812 12:12:26.838436  485208 main.go:141] libmachine: (ha-220134-m02)     </rng>
	I0812 12:12:26.838459  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.838477  485208 main.go:141] libmachine: (ha-220134-m02)     
	I0812 12:12:26.838491  485208 main.go:141] libmachine: (ha-220134-m02)   </devices>
	I0812 12:12:26.838508  485208 main.go:141] libmachine: (ha-220134-m02) </domain>
	I0812 12:12:26.838521  485208 main.go:141] libmachine: (ha-220134-m02) 
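The block above is the libvirt domain XML the kvm2 driver defines for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-220134 network, one on default). A stdlib-only sketch of generating a similarly shaped document with encoding/xml; the struct layout is a simplified, illustrative subset of what the driver actually emits:

// Build a minimal libvirt <domain> document like the one in the log.
package main

import (
	"encoding/xml"
	"fmt"
)

type Domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	VCPU       int         `xml:"vcpu"`
	Interfaces []Interface `xml:"devices>interface"`
}

type Interface struct {
	Type   string `xml:"type,attr"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
	Model struct {
		Type string `xml:"type,attr"`
	} `xml:"model"`
}

func main() {
	d := Domain{Type: "kvm", Name: "ha-220134-m02", VCPU: 2}
	d.Memory.Unit, d.Memory.Value = "MiB", "2200"
	for _, net := range []string{"mk-ha-220134", "default"} {
		var i Interface
		i.Type, i.Source.Network, i.Model.Type = "network", net, "virtio"
		d.Interfaces = append(d.Interfaces, i)
	}
	out, _ := xml.MarshalIndent(d, "", "  ")
	// The resulting XML is what would be handed to libvirt to define the domain.
	fmt.Println(string(out))
}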
	I0812 12:12:26.846325  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:03:92:6e in network default
	I0812 12:12:26.846935  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:26.846954  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring networks are active...
	I0812 12:12:26.847833  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring network default is active
	I0812 12:12:26.848203  485208 main.go:141] libmachine: (ha-220134-m02) Ensuring network mk-ha-220134 is active
	I0812 12:12:26.848670  485208 main.go:141] libmachine: (ha-220134-m02) Getting domain xml...
	I0812 12:12:26.849472  485208 main.go:141] libmachine: (ha-220134-m02) Creating domain...
	I0812 12:12:28.117896  485208 main.go:141] libmachine: (ha-220134-m02) Waiting to get IP...
	I0812 12:12:28.118674  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.119175  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.119218  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.119155  485583 retry.go:31] will retry after 262.905369ms: waiting for machine to come up
	I0812 12:12:28.383737  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.384220  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.384247  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.384169  485583 retry.go:31] will retry after 274.17147ms: waiting for machine to come up
	I0812 12:12:28.660575  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.661106  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.661137  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.661042  485583 retry.go:31] will retry after 326.621097ms: waiting for machine to come up
	I0812 12:12:28.989757  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:28.990290  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:28.990317  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:28.990241  485583 retry.go:31] will retry after 445.162771ms: waiting for machine to come up
	I0812 12:12:29.436700  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:29.437219  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:29.437249  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:29.437167  485583 retry.go:31] will retry after 590.153733ms: waiting for machine to come up
	I0812 12:12:30.029313  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:30.029881  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:30.029912  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:30.029830  485583 retry.go:31] will retry after 932.683171ms: waiting for machine to come up
	I0812 12:12:30.964131  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:30.964693  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:30.964717  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:30.964642  485583 retry.go:31] will retry after 1.16412614s: waiting for machine to come up
	I0812 12:12:32.130419  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:32.130736  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:32.130763  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:32.130695  485583 retry.go:31] will retry after 1.362857789s: waiting for machine to come up
	I0812 12:12:33.495374  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:33.495874  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:33.495913  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:33.495802  485583 retry.go:31] will retry after 1.2101351s: waiting for machine to come up
	I0812 12:12:34.708476  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:34.709004  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:34.709034  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:34.708942  485583 retry.go:31] will retry after 1.883302747s: waiting for machine to come up
	I0812 12:12:36.594343  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:36.594849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:36.594881  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:36.594819  485583 retry.go:31] will retry after 2.391027616s: waiting for machine to come up
	I0812 12:12:38.987566  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:38.988067  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:38.988089  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:38.988028  485583 retry.go:31] will retry after 2.394690775s: waiting for machine to come up
	I0812 12:12:41.383854  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:41.384225  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:41.384255  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:41.384169  485583 retry.go:31] will retry after 3.613894384s: waiting for machine to come up
	I0812 12:12:45.002003  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:45.002449  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find current IP address of domain ha-220134-m02 in network mk-ha-220134
	I0812 12:12:45.002472  485208 main.go:141] libmachine: (ha-220134-m02) DBG | I0812 12:12:45.002405  485583 retry.go:31] will retry after 3.766857993s: waiting for machine to come up
	I0812 12:12:48.772357  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.772989  485208 main.go:141] libmachine: (ha-220134-m02) Found IP for machine: 192.168.39.215
	I0812 12:12:48.773012  485208 main.go:141] libmachine: (ha-220134-m02) Reserving static IP address...
	I0812 12:12:48.773026  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has current primary IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.773477  485208 main.go:141] libmachine: (ha-220134-m02) DBG | unable to find host DHCP lease matching {name: "ha-220134-m02", mac: "52:54:00:fc:dc:57", ip: "192.168.39.215"} in network mk-ha-220134
	I0812 12:12:48.852314  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Getting to WaitForSSH function...
	I0812 12:12:48.852359  485208 main.go:141] libmachine: (ha-220134-m02) Reserved static IP address: 192.168.39.215
	I0812 12:12:48.852373  485208 main.go:141] libmachine: (ha-220134-m02) Waiting for SSH to be available...
	I0812 12:12:48.854740  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.855205  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:48.855231  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.855419  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using SSH client type: external
	I0812 12:12:48.855447  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa (-rw-------)
	I0812 12:12:48.855508  485208 main.go:141] libmachine: (ha-220134-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:12:48.855533  485208 main.go:141] libmachine: (ha-220134-m02) DBG | About to run SSH command:
	I0812 12:12:48.855550  485208 main.go:141] libmachine: (ha-220134-m02) DBG | exit 0
	I0812 12:12:48.981611  485208 main.go:141] libmachine: (ha-220134-m02) DBG | SSH cmd err, output: <nil>: 
	I0812 12:12:48.981869  485208 main.go:141] libmachine: (ha-220134-m02) KVM machine creation complete!
	I0812 12:12:48.982242  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:48.982891  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:48.983139  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:48.983324  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:12:48.983339  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:12:48.984780  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:12:48.984799  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:12:48.984807  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:12:48.984816  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:48.987134  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.987559  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:48.987592  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:48.987724  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:48.987893  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:48.988063  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:48.988220  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:48.988403  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:48.988722  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:48.988737  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:12:49.092550  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:12:49.092574  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:12:49.092583  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.095355  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.095830  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.095857  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.096059  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.096278  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.096482  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.096693  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.096878  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.097070  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.097102  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:12:49.202432  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:12:49.202537  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:12:49.202566  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:12:49.202580  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.202928  485208 buildroot.go:166] provisioning hostname "ha-220134-m02"
	I0812 12:12:49.202965  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.203215  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.206657  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.207060  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.207105  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.207272  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.207507  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.207695  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.207865  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.208069  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.208246  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.208258  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134-m02 && echo "ha-220134-m02" | sudo tee /etc/hostname
	I0812 12:12:49.328118  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134-m02
	
	I0812 12:12:49.328173  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.331055  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.331459  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.331487  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.331685  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.331911  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.332097  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.332230  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.332422  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.332612  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.332629  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:12:49.446865  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:12:49.446910  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:12:49.446941  485208 buildroot.go:174] setting up certificates
	I0812 12:12:49.446956  485208 provision.go:84] configureAuth start
	I0812 12:12:49.446970  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetMachineName
	I0812 12:12:49.447372  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:49.450255  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.450653  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.450685  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.450864  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.453310  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.453558  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.453584  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.453732  485208 provision.go:143] copyHostCerts
	I0812 12:12:49.453761  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:12:49.453794  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:12:49.453803  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:12:49.453869  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:12:49.453963  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:12:49.453982  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:12:49.453988  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:12:49.454015  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:12:49.454092  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:12:49.454109  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:12:49.454116  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:12:49.454139  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:12:49.454222  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134-m02 san=[127.0.0.1 192.168.39.215 ha-220134-m02 localhost minikube]
	I0812 12:12:49.543100  485208 provision.go:177] copyRemoteCerts
	I0812 12:12:49.543166  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:12:49.543197  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.546099  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.546414  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.546443  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.546709  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.546929  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.547117  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.547271  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:49.632125  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:12:49.632207  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:12:49.658475  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:12:49.658555  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:12:49.683939  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:12:49.684009  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:12:49.709945  485208 provision.go:87] duration metric: took 262.97201ms to configureAuth
	I0812 12:12:49.709980  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:12:49.710159  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:49.710252  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.713109  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.713455  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.713538  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.713695  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.713907  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.714119  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.714302  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.714455  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:49.714657  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:49.714680  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:12:49.984937  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:12:49.984964  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:12:49.984973  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetURL
	I0812 12:12:49.986360  485208 main.go:141] libmachine: (ha-220134-m02) DBG | Using libvirt version 6000000
	I0812 12:12:49.988741  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.989181  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.989210  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.989401  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:12:49.989415  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:12:49.989424  485208 client.go:171] duration metric: took 23.860800317s to LocalClient.Create
	I0812 12:12:49.989452  485208 start.go:167] duration metric: took 23.860867443s to libmachine.API.Create "ha-220134"
	I0812 12:12:49.989465  485208 start.go:293] postStartSetup for "ha-220134-m02" (driver="kvm2")
	I0812 12:12:49.989481  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:12:49.989510  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:49.989775  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:12:49.989801  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:49.992084  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.992400  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:49.992425  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:49.992633  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:49.992875  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:49.993045  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:49.993189  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.076449  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:12:50.080833  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:12:50.080866  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:12:50.080940  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:12:50.081038  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:12:50.081053  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:12:50.081202  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:12:50.091441  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:50.118816  485208 start.go:296] duration metric: took 129.330027ms for postStartSetup
	I0812 12:12:50.118877  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetConfigRaw
	I0812 12:12:50.119557  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:50.122565  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.122866  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.122887  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.123226  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:12:50.123424  485208 start.go:128] duration metric: took 24.01448395s to createHost
	I0812 12:12:50.123453  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:50.125600  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.126037  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.126070  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.126232  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.126402  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.126604  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.126757  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.126928  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:12:50.127093  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0812 12:12:50.127104  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:12:50.234361  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464770.206721376
	
	I0812 12:12:50.234390  485208 fix.go:216] guest clock: 1723464770.206721376
	I0812 12:12:50.234398  485208 fix.go:229] Guest: 2024-08-12 12:12:50.206721376 +0000 UTC Remote: 2024-08-12 12:12:50.123437393 +0000 UTC m=+76.974012260 (delta=83.283983ms)
	I0812 12:12:50.234416  485208 fix.go:200] guest clock delta is within tolerance: 83.283983ms
	I0812 12:12:50.234421  485208 start.go:83] releasing machines lock for "ha-220134-m02", held for 24.125613567s
	I0812 12:12:50.234440  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.234724  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:50.237266  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.237599  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.237630  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.240221  485208 out.go:177] * Found network options:
	I0812 12:12:50.242077  485208 out.go:177]   - NO_PROXY=192.168.39.228
	W0812 12:12:50.243527  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:12:50.243567  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244201  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244431  485208 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:12:50.244531  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:12:50.244579  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	W0812 12:12:50.244741  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:12:50.244830  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:12:50.244860  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:12:50.247473  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.247819  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.247897  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.247931  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.248060  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.248147  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:50.248170  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:50.248224  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.248392  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:12:50.248404  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.248655  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.248656  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:12:50.248918  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:12:50.249177  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:12:50.482875  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:12:50.490758  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:12:50.490864  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:12:50.509960  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:12:50.509991  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:12:50.510078  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:12:50.527618  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:12:50.543614  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:12:50.543694  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:12:50.559822  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:12:50.576001  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:12:50.715009  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:12:50.866600  485208 docker.go:233] disabling docker service ...
	I0812 12:12:50.866685  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:12:50.881392  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:12:50.894903  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:12:51.040092  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:12:51.181349  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:12:51.205762  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:12:51.226430  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:12:51.226502  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.238815  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:12:51.238893  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.250801  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.262713  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.274193  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:12:51.285788  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.297333  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.316344  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:12:51.327609  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:12:51.337347  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:12:51.337412  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:12:51.351439  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:12:51.361192  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:12:51.473284  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:12:51.613515  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:12:51.613600  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:12:51.618562  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:12:51.618632  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:12:51.622753  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:12:51.661539  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:12:51.661617  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:12:51.689874  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:12:51.724170  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:12:51.725594  485208 out.go:177]   - env NO_PROXY=192.168.39.228
	I0812 12:12:51.726774  485208 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:12:51.729472  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:51.729817  485208 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:12:41 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:12:51.729849  485208 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:12:51.730089  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:12:51.734707  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:12:51.747065  485208 mustload.go:65] Loading cluster: ha-220134
	I0812 12:12:51.747331  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:12:51.747703  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:51.747737  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:51.762680  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0812 12:12:51.763169  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:51.763671  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:51.763694  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:51.764001  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:51.764187  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:12:51.765663  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:12:51.765958  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:12:51.765980  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:12:51.781778  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0812 12:12:51.782195  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:12:51.782678  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:12:51.782703  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:12:51.783090  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:12:51.783342  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:12:51.783513  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.215
	I0812 12:12:51.783523  485208 certs.go:194] generating shared ca certs ...
	I0812 12:12:51.783537  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.783666  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:12:51.783721  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:12:51.783735  485208 certs.go:256] generating profile certs ...
	I0812 12:12:51.783835  485208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:12:51.783869  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297
	I0812 12:12:51.783885  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.254]
	I0812 12:12:51.989980  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 ...
	I0812 12:12:51.990015  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297: {Name:mk904ce98edd04e7af847e314a39147bd4943a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.990196  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297 ...
	I0812 12:12:51.990210  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297: {Name:mk70d2b31dca95723cdb80442908c3afbe83d830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:12:51.990282  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.12852297 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:12:51.990416  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.12852297 -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:12:51.990547  485208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:12:51.990565  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:12:51.990579  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:12:51.990590  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:12:51.990600  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:12:51.990610  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:12:51.990620  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:12:51.990628  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:12:51.990638  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:12:51.990685  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:12:51.990716  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:12:51.990726  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:12:51.990746  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:12:51.990767  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:12:51.990797  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:12:51.990844  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:12:51.990870  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:12:51.990884  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:51.990896  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:12:51.990929  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:12:51.994763  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:51.995295  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:12:51.995330  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:12:51.995544  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:12:51.995809  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:12:51.996017  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:12:51.996163  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:12:52.069573  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0812 12:12:52.076273  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 12:12:52.090470  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0812 12:12:52.095417  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0812 12:12:52.108460  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 12:12:52.113028  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 12:12:52.123582  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0812 12:12:52.127631  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 12:12:52.137793  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0812 12:12:52.141990  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 12:12:52.152553  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0812 12:12:52.156755  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 12:12:52.167863  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:12:52.193373  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:12:52.217491  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:12:52.242998  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:12:52.268943  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 12:12:52.295572  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:12:52.322283  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:12:52.349270  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:12:52.378251  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:12:52.404955  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:12:52.430391  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:12:52.454957  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 12:12:52.472014  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0812 12:12:52.488709  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 12:12:52.507654  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 12:12:52.526989  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 12:12:52.546197  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 12:12:52.564842  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 12:12:52.581674  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:12:52.587894  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:12:52.598823  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.603491  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.603546  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:12:52.609588  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:12:52.620220  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:12:52.630859  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.635387  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.635454  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:12:52.641135  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:12:52.652479  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:12:52.663804  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.668546  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.668604  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:12:52.674245  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
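The three `openssl x509 -hash` / `ln -fs` pairs above install each trusted certificate under /etc/ssl/certs/<subject-hash>.0, the lookup name OpenSSL-based clients use for trust anchors. A minimal Go sketch of that pattern, shelling out to openssl just as the commands above do (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks certPath into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the `openssl x509 -hash` + `ln -fs` commands in the log.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Path taken from the log; adjust as needed (requires root and openssl).
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}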
	I0812 12:12:52.685130  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:12:52.689571  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:12:52.689644  485208 kubeadm.go:934] updating node {m02 192.168.39.215 8443 v1.30.3 crio true true} ...
	I0812 12:12:52.689755  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:12:52.689778  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:12:52.689811  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:12:52.707169  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:12:52.707253  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
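The "generating kube-vip config" step above emits the static-pod manifest that kube-vip uses to announce the HA virtual IP (192.168.39.254) and, with lb_enable/lb_port set, to load-balance the API server across control-plane nodes. A minimal sketch of rendering such a manifest from a Go text/template follows; the parameter names and the trimmed template are assumptions for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Only the parameters that vary in the generated config above are templated here.
type kubeVipParams struct {
	VIP       string // HA virtual IP announced by kube-vip
	Interface string // NIC to attach the VIP to
	Port      int    // API server port behind the VIP
	EnableLB  bool   // control-plane load balancing (lb_enable)
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_interface, value: "{{.Interface}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: port, value: "{{.Port}}"}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "{{.EnableLB}}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config above.
	p := kubeVipParams{VIP: "192.168.39.254", Interface: "eth0", Port: 8443, EnableLB: true}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}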
	I0812 12:12:52.707327  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:52.717451  485208 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 12:12:52.717533  485208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 12:12:52.727228  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 12:12:52.727257  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:12:52.727352  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:12:52.727377  485208 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0812 12:12:52.727350  485208 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0812 12:12:52.731639  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 12:12:52.731666  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 12:13:24.584364  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:13:24.584507  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:13:24.590913  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 12:13:24.590951  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 12:13:59.119046  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:13:59.135633  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:13:59.135772  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:13:59.141241  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 12:13:59.141281  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
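The kubectl/kubeadm/kubelet downloads above use dl.k8s.io URLs with a `?checksum=file:...sha256` suffix: each binary is accepted into the local cache only after its SHA-256 matches the published checksum file, and is then copied into /var/lib/minikube/binaries/v1.30.3 on the node. A minimal sketch of that verification step (the URLs are the ones in the log; the helper itself is illustrative, not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dst and returns the hex SHA-256 of what was written.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The companion .sha256 file contains just the hex digest of the binary.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic("checksum mismatch for kubelet")
	}
	fmt.Println("kubelet verified:", got)
}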
	I0812 12:13:59.572488  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 12:13:59.582780  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 12:13:59.600764  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:13:59.619020  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:13:59.636410  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:13:59.641212  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:13:59.654356  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:13:59.765868  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:13:59.783638  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:13:59.784000  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:13:59.784028  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:13:59.801445  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0812 12:13:59.802040  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:13:59.802584  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:13:59.802607  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:13:59.803018  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:13:59.803232  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:13:59.803394  485208 start.go:317] joinCluster: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:13:59.803498  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 12:13:59.803514  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:13:59.806558  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:13:59.806974  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:13:59.807004  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:13:59.807223  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:13:59.807396  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:13:59.807587  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:13:59.807773  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:13:59.971957  485208 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:13:59.972017  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dgsrck.rcblur08bhwjdf3e --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m02 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443"
	I0812 12:14:22.462475  485208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dgsrck.rcblur08bhwjdf3e --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m02 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443": (22.490400346s)
	I0812 12:14:22.462526  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 12:14:23.084681  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134-m02 minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=false
	I0812 12:14:23.223450  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220134-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 12:14:23.337117  485208 start.go:319] duration metric: took 23.533715173s to joinCluster
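The join above follows the standard kubeadm HA pattern: ask the existing control plane for a fresh bootstrap token and join command (`kubeadm token create --print-join-command --ttl=0`), append the control-plane flags for the new node, and run it there; afterwards the node is labelled and its control-plane NoSchedule taint is removed so it can also schedule workloads. A rough Go sketch of that sequence, run locally via os/exec rather than over SSH as minikube does (illustrative only, not minikube's joinCluster code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// 1) On an existing control-plane node: print a join command with a fresh token.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// 2) On the new node: run that command plus the control-plane flags seen in
	//    the log (advertise address and bind port here are example values).
	join += " --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443"
	cmd := exec.Command("/bin/bash", "-c", "sudo "+join)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Println("control-plane node joined")
}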
	I0812 12:14:23.337212  485208 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:14:23.337568  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:14:23.338740  485208 out.go:177] * Verifying Kubernetes components...
	I0812 12:14:23.340085  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:14:23.582801  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:14:23.617323  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:14:23.617691  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 12:14:23.617787  485208 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.228:8443
	I0812 12:14:23.618108  485208 node_ready.go:35] waiting up to 6m0s for node "ha-220134-m02" to be "Ready" ...
	I0812 12:14:23.618245  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:23.618256  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:23.618272  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:23.618280  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:23.631756  485208 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0812 12:14:24.118359  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:24.118394  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:24.118406  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:24.118411  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:24.122009  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:24.619036  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:24.619059  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:24.619073  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:24.619078  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:24.622653  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.119070  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:25.119097  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:25.119106  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:25.119111  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:25.122712  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.618850  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:25.618881  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:25.618893  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:25.618899  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:25.622198  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:25.622660  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:26.119272  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:26.119296  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:26.119305  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:26.119309  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:26.123464  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:26.618980  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:26.619005  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:26.619014  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:26.619019  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:26.624758  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:27.118647  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:27.118672  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:27.118680  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:27.118684  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:27.122668  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:27.618399  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:27.618427  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:27.618437  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:27.618441  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:27.621763  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:28.118726  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:28.118750  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:28.118759  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:28.118763  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:28.122740  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:28.123486  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:28.618568  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:28.618598  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:28.618609  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:28.618613  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:28.622521  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:29.118432  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:29.118460  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:29.118469  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:29.118474  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:29.122424  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:29.618631  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:29.618659  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:29.618671  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:29.618679  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:29.622870  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:30.118349  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:30.118371  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:30.118380  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:30.118385  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:30.125291  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:14:30.126391  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:30.618803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:30.618828  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:30.618836  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:30.618840  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:30.622124  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:31.118776  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:31.118800  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:31.118808  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:31.118814  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:31.122290  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:31.618666  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:31.618692  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:31.618700  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:31.618704  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:31.622778  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:32.118883  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:32.118908  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:32.118917  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:32.118921  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:32.122690  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:32.618565  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:32.618592  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:32.618602  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:32.618610  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:32.624694  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:14:32.625151  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:33.118572  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:33.118598  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:33.118607  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:33.118611  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:33.122028  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:33.618591  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:33.618614  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:33.618624  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:33.618629  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:33.622009  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:34.118480  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:34.118506  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:34.118515  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:34.118518  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:34.122091  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:34.619213  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:34.619239  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:34.619248  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:34.619252  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:34.623144  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:35.118522  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:35.118556  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:35.118567  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:35.118574  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:35.123720  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:35.124464  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:35.619021  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:35.619051  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:35.619062  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:35.619069  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:35.624416  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:36.118357  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:36.118380  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:36.118391  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:36.118394  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:36.121777  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:36.618331  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:36.618355  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:36.618364  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:36.618369  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:36.622317  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.118590  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:37.118616  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:37.118623  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:37.118628  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:37.122219  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.618339  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:37.618366  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:37.618374  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:37.618377  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:37.622282  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:37.623098  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:38.118359  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:38.118389  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:38.118399  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:38.118405  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:38.122266  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:38.618381  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:38.618407  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:38.618415  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:38.618420  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:38.622698  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:39.118852  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:39.118878  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:39.118887  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:39.118891  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:39.122666  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:39.618864  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:39.618898  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:39.618908  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:39.618915  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:39.622125  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:40.118768  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:40.118800  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:40.118817  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:40.118823  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:40.122317  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:40.122824  485208 node_ready.go:53] node "ha-220134-m02" has status "Ready":"False"
	I0812 12:14:40.619331  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:40.619362  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:40.619375  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:40.619380  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:40.622748  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:41.118780  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:41.118810  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:41.118821  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:41.118829  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:41.122815  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:41.618552  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:41.618579  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:41.618589  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:41.618597  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:41.622268  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.118438  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.118474  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.118485  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.118492  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.121817  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.122503  485208 node_ready.go:49] node "ha-220134-m02" has status "Ready":"True"
	I0812 12:14:42.122528  485208 node_ready.go:38] duration metric: took 18.504397722s for node "ha-220134-m02" to be "Ready" ...
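The long run of GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02 requests above is a roughly 500ms poll of the node's Ready condition, which flips to True after about 18.5s. The same wait can be expressed with client-go, as in this rough sketch (the kubeconfig path and timings are assumptions; this is not minikube's actual node_ready helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the test run points at its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "ha-220134-m02"
	// Poll every 500ms for up to 6 minutes, as the log's wait loop does.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s is Ready\n", nodeName)
}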
	I0812 12:14:42.122542  485208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:14:42.122634  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:42.122649  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.122660  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.122670  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.127753  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:42.134490  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.134615  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mtqtk
	I0812 12:14:42.134629  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.134640  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.134646  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.137835  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.138797  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.138816  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.138826  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.138832  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.141438  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.142011  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.142026  485208 pod_ready.go:81] duration metric: took 7.499039ms for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.142038  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.142104  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t8pg7
	I0812 12:14:42.142113  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.142120  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.142124  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.145303  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.146138  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.146160  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.146170  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.146176  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.150866  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:42.151425  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.151447  485208 pod_ready.go:81] duration metric: took 9.399509ms for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.151457  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.151518  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134
	I0812 12:14:42.151527  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.151534  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.151537  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.154655  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.155164  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.155180  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.155187  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.155191  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.157554  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.158099  485208 pod_ready.go:92] pod "etcd-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.158119  485208 pod_ready.go:81] duration metric: took 6.655004ms for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.158131  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.158256  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m02
	I0812 12:14:42.158269  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.158277  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.158282  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.160828  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.161508  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.161528  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.161538  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.161545  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.164082  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:14:42.164525  485208 pod_ready.go:92] pod "etcd-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.164552  485208 pod_ready.go:81] duration metric: took 6.412866ms for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.164575  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.319083  485208 request.go:629] Waited for 154.40374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:14:42.319182  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:14:42.319190  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.319205  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.319214  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.322923  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.519186  485208 request.go:629] Waited for 195.458039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.519273  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:42.519279  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.519286  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.519290  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.522936  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.523668  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.523700  485208 pod_ready.go:81] duration metric: took 359.109868ms for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.523714  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.718822  485208 request.go:629] Waited for 195.000146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:14:42.718905  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:14:42.718911  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.718920  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.718929  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.722637  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.918759  485208 request.go:629] Waited for 195.425883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.918827  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:42.918835  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:42.918843  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:42.918849  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:42.922131  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:42.922847  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:42.922869  485208 pod_ready.go:81] duration metric: took 399.143428ms for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:42.922881  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.119019  485208 request.go:629] Waited for 196.034578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:14:43.119100  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:14:43.119108  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.119120  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.119132  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.123174  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:43.319490  485208 request.go:629] Waited for 195.267129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:43.319565  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:43.319574  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.319582  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.319589  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.322678  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.323332  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:43.323359  485208 pod_ready.go:81] duration metric: took 400.471136ms for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.323370  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.519331  485208 request.go:629] Waited for 195.852908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:14:43.519430  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:14:43.519442  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.519452  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.519460  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.523238  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.719361  485208 request.go:629] Waited for 195.430203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:43.719464  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:43.719470  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.719477  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.719482  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.723467  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:43.723958  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:43.723977  485208 pod_ready.go:81] duration metric: took 400.601195ms for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.723987  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:43.919229  485208 request.go:629] Waited for 195.141841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:14:43.919322  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:14:43.919330  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:43.919342  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:43.919352  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:43.922963  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.119062  485208 request.go:629] Waited for 195.406086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:44.119151  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:44.119159  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.119178  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.119201  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.122989  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.123805  485208 pod_ready.go:92] pod "kube-proxy-bs72f" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.123824  485208 pod_ready.go:81] duration metric: took 399.831421ms for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.123834  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.318966  485208 request.go:629] Waited for 195.049756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:14:44.319089  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:14:44.319102  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.319112  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.319123  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.322640  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.518604  485208 request.go:629] Waited for 195.303631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.518675  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.518681  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.518694  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.518701  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.522291  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.522947  485208 pod_ready.go:92] pod "kube-proxy-zcgh8" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.522970  485208 pod_ready.go:81] duration metric: took 399.128934ms for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.522985  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.718944  485208 request.go:629] Waited for 195.868915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:14:44.719010  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:14:44.719016  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.719028  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.719035  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.722951  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:44.919343  485208 request.go:629] Waited for 195.527273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.919418  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:14:44.919425  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:44.919433  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:44.919437  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:44.924372  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:14:44.924861  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:44.924882  485208 pod_ready.go:81] duration metric: took 401.890241ms for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:44.924891  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:45.118952  485208 request.go:629] Waited for 193.966343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:14:45.119032  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:14:45.119037  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.119051  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.119056  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.122400  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:45.319159  485208 request.go:629] Waited for 196.184111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:45.319258  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:14:45.319267  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.319279  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.319291  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.332040  485208 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0812 12:14:45.332524  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:14:45.332557  485208 pod_ready.go:81] duration metric: took 407.658979ms for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:14:45.332568  485208 pod_ready.go:38] duration metric: took 3.210007717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
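The pod_ready.go lines above poll each system pod until its PodReady condition reports True; the ~195ms waits come from client-go's client-side rate limiter, not from a deliberate poll interval. As a rough illustration only (not minikube's pod_ready.go; the helper names, the 200ms interval and the simplified error handling are invented for this sketch), a comparable readiness wait with client-go could look like:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod's PodReady condition is True.
    func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    // waitForPodReady polls isPodReady until it succeeds or the timeout elapses.
    func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 200*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) { return isPodReady(ctx, c, ns, name) })
    }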
	I0812 12:14:45.332589  485208 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:14:45.332653  485208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:14:45.353986  485208 api_server.go:72] duration metric: took 22.016705095s to wait for apiserver process to appear ...
	I0812 12:14:45.354022  485208 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:14:45.354051  485208 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0812 12:14:45.366431  485208 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0812 12:14:45.366536  485208 round_trippers.go:463] GET https://192.168.39.228:8443/version
	I0812 12:14:45.366546  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.366558  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.366568  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.367697  485208 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0812 12:14:45.367860  485208 api_server.go:141] control plane version: v1.30.3
	I0812 12:14:45.367885  485208 api_server.go:131] duration metric: took 13.854938ms to wait for apiserver health ...
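The healthz step above is a plain HTTPS GET on /healthz expecting a 200 response with body "ok", followed by GET /version to read the reported control-plane version (v1.30.3 here). A stand-alone equivalent, assuming an *http.Client that already trusts the cluster CA and presents the admin client certificate (an assumption of this sketch, not shown), might be:

    package sketch

    import (
        "context"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // checkHealthz returns nil when GET <base>/healthz answers 200 "ok".
    // Sketch only: the caller supplies a TLS-configured client and a base URL
    // such as "https://192.168.39.228:8443".
    func checkHealthz(ctx context.Context, client *http.Client, base string) error {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, base+"/healthz", nil)
        if err != nil {
            return err
        }
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }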
	I0812 12:14:45.367894  485208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:14:45.519121  485208 request.go:629] Waited for 151.144135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.519186  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.519191  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.519205  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.519210  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.524373  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:14:45.529010  485208 system_pods.go:59] 17 kube-system pods found
	I0812 12:14:45.529045  485208 system_pods.go:61] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:14:45.529051  485208 system_pods.go:61] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:14:45.529055  485208 system_pods.go:61] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:14:45.529058  485208 system_pods.go:61] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:14:45.529061  485208 system_pods.go:61] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:14:45.529065  485208 system_pods.go:61] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:14:45.529068  485208 system_pods.go:61] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:14:45.529071  485208 system_pods.go:61] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:14:45.529076  485208 system_pods.go:61] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:14:45.529090  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:14:45.529098  485208 system_pods.go:61] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:14:45.529103  485208 system_pods.go:61] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:14:45.529112  485208 system_pods.go:61] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:14:45.529117  485208 system_pods.go:61] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:14:45.529124  485208 system_pods.go:61] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:14:45.529129  485208 system_pods.go:61] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:14:45.529133  485208 system_pods.go:61] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:14:45.529139  485208 system_pods.go:74] duration metric: took 161.238707ms to wait for pod list to return data ...
	I0812 12:14:45.529150  485208 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:14:45.718564  485208 request.go:629] Waited for 189.321436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:14:45.718696  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:14:45.718708  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.718716  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.718722  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.722424  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:45.722678  485208 default_sa.go:45] found service account: "default"
	I0812 12:14:45.722695  485208 default_sa.go:55] duration metric: took 193.536981ms for default service account to be created ...
	I0812 12:14:45.722704  485208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:14:45.919028  485208 request.go:629] Waited for 196.232627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.919104  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:14:45.919112  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:45.919122  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:45.919131  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:45.928358  485208 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 12:14:45.933300  485208 system_pods.go:86] 17 kube-system pods found
	I0812 12:14:45.933332  485208 system_pods.go:89] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:14:45.933338  485208 system_pods.go:89] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:14:45.933342  485208 system_pods.go:89] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:14:45.933346  485208 system_pods.go:89] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:14:45.933350  485208 system_pods.go:89] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:14:45.933355  485208 system_pods.go:89] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:14:45.933359  485208 system_pods.go:89] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:14:45.933363  485208 system_pods.go:89] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:14:45.933367  485208 system_pods.go:89] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:14:45.933371  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:14:45.933375  485208 system_pods.go:89] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:14:45.933378  485208 system_pods.go:89] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:14:45.933382  485208 system_pods.go:89] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:14:45.933387  485208 system_pods.go:89] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:14:45.933391  485208 system_pods.go:89] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:14:45.933394  485208 system_pods.go:89] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:14:45.933398  485208 system_pods.go:89] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:14:45.933405  485208 system_pods.go:126] duration metric: took 210.695106ms to wait for k8s-apps to be running ...
	I0812 12:14:45.933414  485208 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:14:45.933465  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:14:45.952301  485208 system_svc.go:56] duration metric: took 18.873436ms WaitForService to wait for kubelet
	I0812 12:14:45.952333  485208 kubeadm.go:582] duration metric: took 22.615059023s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:14:45.952354  485208 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:14:46.118844  485208 request.go:629] Waited for 166.394903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes
	I0812 12:14:46.118934  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes
	I0812 12:14:46.118939  485208 round_trippers.go:469] Request Headers:
	I0812 12:14:46.118947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:14:46.118952  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:14:46.122551  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:14:46.123303  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:14:46.123344  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:14:46.123380  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:14:46.123387  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:14:46.123398  485208 node_conditions.go:105] duration metric: took 171.038039ms to run NodePressure ...
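The NodePressure check above only reads the capacity reported on each Node object (2 CPUs and 17734596Ki of ephemeral storage per node at this point in the run). Reading those fields with client-go is one map lookup per resource; this generic sketch (same imports as the readiness sketch earlier, plus fmt) is not the node_conditions.go implementation:

    // printNodeCapacity lists every node and prints its CPU and ephemeral-storage capacity.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }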
	I0812 12:14:46.123418  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:14:46.123468  485208 start.go:255] writing updated cluster config ...
	I0812 12:14:46.125754  485208 out.go:177] 
	I0812 12:14:46.127730  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:14:46.127883  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:14:46.129594  485208 out.go:177] * Starting "ha-220134-m03" control-plane node in "ha-220134" cluster
	I0812 12:14:46.131036  485208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:14:46.131075  485208 cache.go:56] Caching tarball of preloaded images
	I0812 12:14:46.131203  485208 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:14:46.131219  485208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:14:46.131350  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:14:46.132144  485208 start.go:360] acquireMachinesLock for ha-220134-m03: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:14:46.132212  485208 start.go:364] duration metric: took 32.881µs to acquireMachinesLock for "ha-220134-m03"
	I0812 12:14:46.132232  485208 start.go:93] Provisioning new machine with config: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:14:46.132423  485208 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0812 12:14:46.134079  485208 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 12:14:46.134186  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:14:46.134227  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:14:46.150478  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0812 12:14:46.150898  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:14:46.151443  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:14:46.151469  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:14:46.151813  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:14:46.152048  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:14:46.152271  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:14:46.152435  485208 start.go:159] libmachine.API.Create for "ha-220134" (driver="kvm2")
	I0812 12:14:46.152466  485208 client.go:168] LocalClient.Create starting
	I0812 12:14:46.152506  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 12:14:46.152558  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:14:46.152581  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:14:46.152654  485208 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 12:14:46.152682  485208 main.go:141] libmachine: Decoding PEM data...
	I0812 12:14:46.152698  485208 main.go:141] libmachine: Parsing certificate...
	I0812 12:14:46.152723  485208 main.go:141] libmachine: Running pre-create checks...
	I0812 12:14:46.152735  485208 main.go:141] libmachine: (ha-220134-m03) Calling .PreCreateCheck
	I0812 12:14:46.152913  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:14:46.153323  485208 main.go:141] libmachine: Creating machine...
	I0812 12:14:46.153339  485208 main.go:141] libmachine: (ha-220134-m03) Calling .Create
	I0812 12:14:46.153465  485208 main.go:141] libmachine: (ha-220134-m03) Creating KVM machine...
	I0812 12:14:46.154675  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found existing default KVM network
	I0812 12:14:46.154783  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found existing private KVM network mk-ha-220134
	I0812 12:14:46.154915  485208 main.go:141] libmachine: (ha-220134-m03) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 ...
	I0812 12:14:46.154931  485208 main.go:141] libmachine: (ha-220134-m03) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 12:14:46.154994  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.154920  486241 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:14:46.155107  485208 main.go:141] libmachine: (ha-220134-m03) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 12:14:46.441625  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.441499  486241 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa...
	I0812 12:14:46.630286  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.630122  486241 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/ha-220134-m03.rawdisk...
	I0812 12:14:46.630322  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Writing magic tar header
	I0812 12:14:46.630337  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Writing SSH key tar header
	I0812 12:14:46.630352  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:46.630263  486241 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 ...
	I0812 12:14:46.630537  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03
	I0812 12:14:46.630663  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03 (perms=drwx------)
	I0812 12:14:46.630680  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 12:14:46.630701  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:14:46.630710  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 12:14:46.630720  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 12:14:46.630747  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 12:14:46.630765  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home/jenkins
	I0812 12:14:46.630778  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Checking permissions on dir: /home
	I0812 12:14:46.630785  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Skipping /home - not owner
	I0812 12:14:46.630801  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 12:14:46.630810  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 12:14:46.630821  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 12:14:46.630830  485208 main.go:141] libmachine: (ha-220134-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 12:14:46.630838  485208 main.go:141] libmachine: (ha-220134-m03) Creating domain...
	I0812 12:14:46.631887  485208 main.go:141] libmachine: (ha-220134-m03) define libvirt domain using xml: 
	I0812 12:14:46.631906  485208 main.go:141] libmachine: (ha-220134-m03) <domain type='kvm'>
	I0812 12:14:46.631917  485208 main.go:141] libmachine: (ha-220134-m03)   <name>ha-220134-m03</name>
	I0812 12:14:46.631925  485208 main.go:141] libmachine: (ha-220134-m03)   <memory unit='MiB'>2200</memory>
	I0812 12:14:46.631932  485208 main.go:141] libmachine: (ha-220134-m03)   <vcpu>2</vcpu>
	I0812 12:14:46.631939  485208 main.go:141] libmachine: (ha-220134-m03)   <features>
	I0812 12:14:46.631948  485208 main.go:141] libmachine: (ha-220134-m03)     <acpi/>
	I0812 12:14:46.631956  485208 main.go:141] libmachine: (ha-220134-m03)     <apic/>
	I0812 12:14:46.631966  485208 main.go:141] libmachine: (ha-220134-m03)     <pae/>
	I0812 12:14:46.631976  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632009  485208 main.go:141] libmachine: (ha-220134-m03)   </features>
	I0812 12:14:46.632034  485208 main.go:141] libmachine: (ha-220134-m03)   <cpu mode='host-passthrough'>
	I0812 12:14:46.632059  485208 main.go:141] libmachine: (ha-220134-m03)   
	I0812 12:14:46.632084  485208 main.go:141] libmachine: (ha-220134-m03)   </cpu>
	I0812 12:14:46.632094  485208 main.go:141] libmachine: (ha-220134-m03)   <os>
	I0812 12:14:46.632101  485208 main.go:141] libmachine: (ha-220134-m03)     <type>hvm</type>
	I0812 12:14:46.632109  485208 main.go:141] libmachine: (ha-220134-m03)     <boot dev='cdrom'/>
	I0812 12:14:46.632114  485208 main.go:141] libmachine: (ha-220134-m03)     <boot dev='hd'/>
	I0812 12:14:46.632120  485208 main.go:141] libmachine: (ha-220134-m03)     <bootmenu enable='no'/>
	I0812 12:14:46.632127  485208 main.go:141] libmachine: (ha-220134-m03)   </os>
	I0812 12:14:46.632133  485208 main.go:141] libmachine: (ha-220134-m03)   <devices>
	I0812 12:14:46.632138  485208 main.go:141] libmachine: (ha-220134-m03)     <disk type='file' device='cdrom'>
	I0812 12:14:46.632146  485208 main.go:141] libmachine: (ha-220134-m03)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/boot2docker.iso'/>
	I0812 12:14:46.632158  485208 main.go:141] libmachine: (ha-220134-m03)       <target dev='hdc' bus='scsi'/>
	I0812 12:14:46.632167  485208 main.go:141] libmachine: (ha-220134-m03)       <readonly/>
	I0812 12:14:46.632177  485208 main.go:141] libmachine: (ha-220134-m03)     </disk>
	I0812 12:14:46.632186  485208 main.go:141] libmachine: (ha-220134-m03)     <disk type='file' device='disk'>
	I0812 12:14:46.632197  485208 main.go:141] libmachine: (ha-220134-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 12:14:46.632208  485208 main.go:141] libmachine: (ha-220134-m03)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/ha-220134-m03.rawdisk'/>
	I0812 12:14:46.632216  485208 main.go:141] libmachine: (ha-220134-m03)       <target dev='hda' bus='virtio'/>
	I0812 12:14:46.632224  485208 main.go:141] libmachine: (ha-220134-m03)     </disk>
	I0812 12:14:46.632229  485208 main.go:141] libmachine: (ha-220134-m03)     <interface type='network'>
	I0812 12:14:46.632237  485208 main.go:141] libmachine: (ha-220134-m03)       <source network='mk-ha-220134'/>
	I0812 12:14:46.632242  485208 main.go:141] libmachine: (ha-220134-m03)       <model type='virtio'/>
	I0812 12:14:46.632248  485208 main.go:141] libmachine: (ha-220134-m03)     </interface>
	I0812 12:14:46.632255  485208 main.go:141] libmachine: (ha-220134-m03)     <interface type='network'>
	I0812 12:14:46.632260  485208 main.go:141] libmachine: (ha-220134-m03)       <source network='default'/>
	I0812 12:14:46.632267  485208 main.go:141] libmachine: (ha-220134-m03)       <model type='virtio'/>
	I0812 12:14:46.632300  485208 main.go:141] libmachine: (ha-220134-m03)     </interface>
	I0812 12:14:46.632326  485208 main.go:141] libmachine: (ha-220134-m03)     <serial type='pty'>
	I0812 12:14:46.632336  485208 main.go:141] libmachine: (ha-220134-m03)       <target port='0'/>
	I0812 12:14:46.632345  485208 main.go:141] libmachine: (ha-220134-m03)     </serial>
	I0812 12:14:46.632354  485208 main.go:141] libmachine: (ha-220134-m03)     <console type='pty'>
	I0812 12:14:46.632364  485208 main.go:141] libmachine: (ha-220134-m03)       <target type='serial' port='0'/>
	I0812 12:14:46.632373  485208 main.go:141] libmachine: (ha-220134-m03)     </console>
	I0812 12:14:46.632383  485208 main.go:141] libmachine: (ha-220134-m03)     <rng model='virtio'>
	I0812 12:14:46.632393  485208 main.go:141] libmachine: (ha-220134-m03)       <backend model='random'>/dev/random</backend>
	I0812 12:14:46.632407  485208 main.go:141] libmachine: (ha-220134-m03)     </rng>
	I0812 12:14:46.632419  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632428  485208 main.go:141] libmachine: (ha-220134-m03)     
	I0812 12:14:46.632436  485208 main.go:141] libmachine: (ha-220134-m03)   </devices>
	I0812 12:14:46.632447  485208 main.go:141] libmachine: (ha-220134-m03) </domain>
	I0812 12:14:46.632461  485208 main.go:141] libmachine: (ha-220134-m03) 
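For readability, the libvirt domain definition that the lines above print one log line at a time, reassembled (names, paths and values exactly as logged; the empty placeholder lines are dropped):

    <domain type='kvm'>
      <name>ha-220134-m03</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/ha-220134-m03.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ha-220134'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>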
	I0812 12:14:46.639821  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:44:47:08 in network default
	I0812 12:14:46.640512  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring networks are active...
	I0812 12:14:46.640535  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:46.641535  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring network default is active
	I0812 12:14:46.641898  485208 main.go:141] libmachine: (ha-220134-m03) Ensuring network mk-ha-220134 is active
	I0812 12:14:46.642359  485208 main.go:141] libmachine: (ha-220134-m03) Getting domain xml...
	I0812 12:14:46.643166  485208 main.go:141] libmachine: (ha-220134-m03) Creating domain...
	I0812 12:14:47.884575  485208 main.go:141] libmachine: (ha-220134-m03) Waiting to get IP...
	I0812 12:14:47.885445  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:47.885899  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:47.885971  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:47.885924  486241 retry.go:31] will retry after 188.796368ms: waiting for machine to come up
	I0812 12:14:48.076663  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.077201  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.077238  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.077133  486241 retry.go:31] will retry after 370.309742ms: waiting for machine to come up
	I0812 12:14:48.448719  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.449208  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.449238  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.449178  486241 retry.go:31] will retry after 362.104049ms: waiting for machine to come up
	I0812 12:14:48.812749  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:48.813248  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:48.813277  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:48.813192  486241 retry.go:31] will retry after 420.630348ms: waiting for machine to come up
	I0812 12:14:49.236077  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:49.236649  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:49.236689  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:49.236595  486241 retry.go:31] will retry after 508.154573ms: waiting for machine to come up
	I0812 12:14:49.746293  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:49.746809  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:49.746841  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:49.746748  486241 retry.go:31] will retry after 838.157149ms: waiting for machine to come up
	I0812 12:14:50.586377  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:50.586929  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:50.586961  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:50.586882  486241 retry.go:31] will retry after 851.729786ms: waiting for machine to come up
	I0812 12:14:51.440568  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:51.441091  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:51.441130  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:51.441032  486241 retry.go:31] will retry after 1.010425115s: waiting for machine to come up
	I0812 12:14:52.452738  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:52.453261  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:52.453294  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:52.453174  486241 retry.go:31] will retry after 1.424809996s: waiting for machine to come up
	I0812 12:14:53.879589  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:53.880112  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:53.880146  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:53.880052  486241 retry.go:31] will retry after 1.51155576s: waiting for machine to come up
	I0812 12:14:55.393922  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:55.394399  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:55.394433  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:55.394321  486241 retry.go:31] will retry after 2.74908064s: waiting for machine to come up
	I0812 12:14:58.144733  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:14:58.145236  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:14:58.145269  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:14:58.145177  486241 retry.go:31] will retry after 3.0862077s: waiting for machine to come up
	I0812 12:15:01.233615  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:01.234213  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:15:01.234247  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:15:01.234160  486241 retry.go:31] will retry after 3.24342849s: waiting for machine to come up
	I0812 12:15:04.480919  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:04.481316  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find current IP address of domain ha-220134-m03 in network mk-ha-220134
	I0812 12:15:04.481346  485208 main.go:141] libmachine: (ha-220134-m03) DBG | I0812 12:15:04.481266  486241 retry.go:31] will retry after 4.361114987s: waiting for machine to come up
	I0812 12:15:08.844313  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.845037  485208 main.go:141] libmachine: (ha-220134-m03) Found IP for machine: 192.168.39.186
	I0812 12:15:08.845075  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has current primary IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.845107  485208 main.go:141] libmachine: (ha-220134-m03) Reserving static IP address...
	I0812 12:15:08.845427  485208 main.go:141] libmachine: (ha-220134-m03) DBG | unable to find host DHCP lease matching {name: "ha-220134-m03", mac: "52:54:00:dc:00:32", ip: "192.168.39.186"} in network mk-ha-220134
	I0812 12:15:08.928064  485208 main.go:141] libmachine: (ha-220134-m03) Reserved static IP address: 192.168.39.186
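While the new VM boots, the driver repeatedly looks up the domain's MAC address in the libvirt DHCP leases and, on every miss, retries after a growing delay (188ms, 370ms, ... up to 4.3s above) until the lease appears about 21 seconds after the domain was started. In the same spirit, a generic capped-backoff wait looks like the sketch below; the delays and the lookupIP helper are invented for illustration, and this is not minikube's retry.go:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForIP polls lookupIP until it returns a non-empty address or the timeout
    // passes, sleeping a little longer after each failed attempt (capped backoff).
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow ~1.5x per attempt, roughly like the logged retries
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for an IP address", timeout)
    }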
	I0812 12:15:08.928112  485208 main.go:141] libmachine: (ha-220134-m03) Waiting for SSH to be available...
	I0812 12:15:08.928125  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Getting to WaitForSSH function...
	I0812 12:15:08.931087  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.931624  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:08.931659  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:08.931857  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using SSH client type: external
	I0812 12:15:08.931886  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa (-rw-------)
	I0812 12:15:08.931919  485208 main.go:141] libmachine: (ha-220134-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 12:15:08.931934  485208 main.go:141] libmachine: (ha-220134-m03) DBG | About to run SSH command:
	I0812 12:15:08.931947  485208 main.go:141] libmachine: (ha-220134-m03) DBG | exit 0
	I0812 12:15:09.057066  485208 main.go:141] libmachine: (ha-220134-m03) DBG | SSH cmd err, output: <nil>: 
	I0812 12:15:09.057378  485208 main.go:141] libmachine: (ha-220134-m03) KVM machine creation complete!
	I0812 12:15:09.057743  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:15:09.058284  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:09.058473  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:09.058639  485208 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 12:15:09.058655  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:15:09.060036  485208 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 12:15:09.060052  485208 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 12:15:09.060057  485208 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 12:15:09.060063  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.062560  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.062955  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.062984  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.063145  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.063299  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.063487  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.063662  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.063832  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.064051  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.064063  485208 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 12:15:09.172538  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:15:09.172569  485208 main.go:141] libmachine: Detecting the provisioner...
	I0812 12:15:09.172578  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.175739  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.176177  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.176205  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.176341  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.176640  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.176853  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.177009  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.177253  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.177425  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.177439  485208 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 12:15:09.286107  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 12:15:09.286183  485208 main.go:141] libmachine: found compatible host: buildroot
	I0812 12:15:09.286194  485208 main.go:141] libmachine: Provisioning with buildroot...
	I0812 12:15:09.286205  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.286489  485208 buildroot.go:166] provisioning hostname "ha-220134-m03"
	I0812 12:15:09.286529  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.286740  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.289861  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.290324  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.290361  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.290544  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.290733  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.290906  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.291084  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.291256  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.291475  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.291493  485208 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134-m03 && echo "ha-220134-m03" | sudo tee /etc/hostname
	I0812 12:15:09.418898  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134-m03
	
	I0812 12:15:09.418933  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.422111  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.422527  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.422558  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.422768  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.422987  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.423189  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.423343  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.423523  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.423716  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.423733  485208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:15:09.543765  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:15:09.543804  485208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:15:09.543822  485208 buildroot.go:174] setting up certificates
	I0812 12:15:09.543833  485208 provision.go:84] configureAuth start
	I0812 12:15:09.543846  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetMachineName
	I0812 12:15:09.544164  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:09.547578  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.548065  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.548097  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.548368  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.550642  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.550993  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.551016  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.551197  485208 provision.go:143] copyHostCerts
	I0812 12:15:09.551247  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:15:09.551302  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:15:09.551311  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:15:09.551379  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:15:09.551464  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:15:09.551481  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:15:09.551488  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:15:09.551514  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:15:09.551562  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:15:09.551578  485208 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:15:09.551585  485208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:15:09.551605  485208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:15:09.551664  485208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134-m03 san=[127.0.0.1 192.168.39.186 ha-220134-m03 localhost minikube]
	I0812 12:15:09.691269  485208 provision.go:177] copyRemoteCerts
	I0812 12:15:09.691330  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:15:09.691356  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.694292  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.694610  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.694644  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.694805  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.695006  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.695179  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.695319  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:09.779238  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:15:09.779324  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:15:09.806470  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:15:09.806562  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0812 12:15:09.833996  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:15:09.834076  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 12:15:09.861148  485208 provision.go:87] duration metric: took 317.299651ms to configureAuth
	I0812 12:15:09.861193  485208 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:15:09.861496  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:09.861609  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:09.864409  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.864927  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:09.864959  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:09.865158  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:09.865374  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.865604  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:09.865775  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:09.865984  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:09.866162  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:09.866177  485208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:15:10.141905  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:15:10.141948  485208 main.go:141] libmachine: Checking connection to Docker...
	I0812 12:15:10.141961  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetURL
	I0812 12:15:10.143339  485208 main.go:141] libmachine: (ha-220134-m03) DBG | Using libvirt version 6000000
	I0812 12:15:10.145583  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.146035  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.146072  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.146240  485208 main.go:141] libmachine: Docker is up and running!
	I0812 12:15:10.146253  485208 main.go:141] libmachine: Reticulating splines...
	I0812 12:15:10.146261  485208 client.go:171] duration metric: took 23.993783736s to LocalClient.Create
	I0812 12:15:10.146288  485208 start.go:167] duration metric: took 23.993850825s to libmachine.API.Create "ha-220134"
	I0812 12:15:10.146299  485208 start.go:293] postStartSetup for "ha-220134-m03" (driver="kvm2")
	I0812 12:15:10.146313  485208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:15:10.146328  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.146603  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:15:10.146623  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.148993  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.149438  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.149468  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.149645  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.149838  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.150034  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.150210  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.236302  485208 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:15:10.240755  485208 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:15:10.240788  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:15:10.240866  485208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:15:10.240937  485208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:15:10.240946  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:15:10.241026  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:15:10.251073  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:15:10.275608  485208 start.go:296] duration metric: took 129.289194ms for postStartSetup
	I0812 12:15:10.275664  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetConfigRaw
	I0812 12:15:10.276276  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:10.278912  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.279215  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.279241  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.279538  485208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:15:10.279739  485208 start.go:128] duration metric: took 24.147300324s to createHost
	I0812 12:15:10.279767  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.282242  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.282621  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.282650  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.282773  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.282972  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.283203  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.283338  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.283491  485208 main.go:141] libmachine: Using SSH client type: native
	I0812 12:15:10.283666  485208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I0812 12:15:10.283677  485208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:15:10.394572  485208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723464910.368454015
	
	I0812 12:15:10.394602  485208 fix.go:216] guest clock: 1723464910.368454015
	I0812 12:15:10.394612  485208 fix.go:229] Guest: 2024-08-12 12:15:10.368454015 +0000 UTC Remote: 2024-08-12 12:15:10.27975226 +0000 UTC m=+217.130327126 (delta=88.701755ms)
	I0812 12:15:10.394636  485208 fix.go:200] guest clock delta is within tolerance: 88.701755ms
	I0812 12:15:10.394644  485208 start.go:83] releasing machines lock for "ha-220134-m03", held for 24.262422311s
	I0812 12:15:10.394667  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.394980  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:10.398332  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.398786  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.398815  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.400828  485208 out.go:177] * Found network options:
	I0812 12:15:10.402285  485208 out.go:177]   - NO_PROXY=192.168.39.228,192.168.39.215
	W0812 12:15:10.403549  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 12:15:10.403572  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:15:10.403589  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404254  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404527  485208 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:15:10.404655  485208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:15:10.404698  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	W0812 12:15:10.404774  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	W0812 12:15:10.404807  485208 proxy.go:119] fail to check proxy env: Error ip not in block
	I0812 12:15:10.404884  485208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:15:10.404908  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:15:10.407557  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.407768  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408059  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.408082  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408402  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:10.408427  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:10.408436  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.408663  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.408729  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:15:10.408857  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:15:10.408887  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.409066  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:15:10.409074  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.409239  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:15:10.649138  485208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:15:10.656231  485208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:15:10.656313  485208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:15:10.673736  485208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 12:15:10.673761  485208 start.go:495] detecting cgroup driver to use...
	I0812 12:15:10.673825  485208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:15:10.691199  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:15:10.706610  485208 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:15:10.706682  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:15:10.721355  485208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:15:10.737340  485208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:15:10.867875  485208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:15:11.034902  485208 docker.go:233] disabling docker service ...
	I0812 12:15:11.034999  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:15:11.058103  485208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:15:11.074000  485208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:15:11.216608  485208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:15:11.342608  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:15:11.359897  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:15:11.380642  485208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:15:11.380708  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.391300  485208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:15:11.391378  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.403641  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.415329  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.426601  485208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:15:11.437779  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.449221  485208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.467114  485208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:15:11.478693  485208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:15:11.488264  485208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 12:15:11.488342  485208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 12:15:11.502327  485208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:15:11.513785  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:11.641677  485208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:15:11.791705  485208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:15:11.791792  485208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:15:11.796976  485208 start.go:563] Will wait 60s for crictl version
	I0812 12:15:11.797059  485208 ssh_runner.go:195] Run: which crictl
	I0812 12:15:11.801905  485208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:15:11.849014  485208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:15:11.849135  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:15:11.881023  485208 ssh_runner.go:195] Run: crio --version
	I0812 12:15:11.915071  485208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:15:11.916784  485208 out.go:177]   - env NO_PROXY=192.168.39.228
	I0812 12:15:11.918466  485208 out.go:177]   - env NO_PROXY=192.168.39.228,192.168.39.215
	I0812 12:15:11.919870  485208 main.go:141] libmachine: (ha-220134-m03) Calling .GetIP
	I0812 12:15:11.922787  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:11.923224  485208 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:15:11.923256  485208 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:15:11.923524  485208 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:15:11.928325  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:15:11.942473  485208 mustload.go:65] Loading cluster: ha-220134
	I0812 12:15:11.942789  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:11.943051  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:11.943092  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:11.959670  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0812 12:15:11.960163  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:11.960708  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:11.960735  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:11.961123  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:11.961415  485208 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:15:11.963550  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:15:11.963855  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:11.963895  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:11.979646  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0812 12:15:11.980156  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:11.980701  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:11.980731  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:11.981028  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:11.981258  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:15:11.981458  485208 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.186
	I0812 12:15:11.981470  485208 certs.go:194] generating shared ca certs ...
	I0812 12:15:11.981495  485208 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:11.981642  485208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:15:11.981731  485208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:15:11.981746  485208 certs.go:256] generating profile certs ...
	I0812 12:15:11.981855  485208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:15:11.981894  485208 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a
	I0812 12:15:11.981912  485208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.186 192.168.39.254]
	I0812 12:15:12.248323  485208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a ...
	I0812 12:15:12.248383  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a: {Name:mkb3073f2fe8aabdbf88fa505342e41968793922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:12.248639  485208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a ...
	I0812 12:15:12.248663  485208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a: {Name:mkd338db6afdce959177496d1622a16e570568c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:15:12.248814  485208 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.eca6af3a -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:15:12.248993  485208 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.eca6af3a -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:15:12.249224  485208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:15:12.249252  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:15:12.249276  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:15:12.249295  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:15:12.249310  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:15:12.249326  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:15:12.249341  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:15:12.249356  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:15:12.249371  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:15:12.249439  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:15:12.249476  485208 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:15:12.249487  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:15:12.249514  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:15:12.249539  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:15:12.249564  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:15:12.249607  485208 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:15:12.249636  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.249654  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.249669  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.249712  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:15:12.252718  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:12.253161  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:15:12.253195  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:12.253310  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:15:12.253538  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:15:12.253733  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:15:12.253905  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:15:12.325509  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0812 12:15:12.331478  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0812 12:15:12.346358  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0812 12:15:12.351843  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0812 12:15:12.365038  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0812 12:15:12.370394  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0812 12:15:12.382010  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0812 12:15:12.387090  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0812 12:15:12.410634  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0812 12:15:12.415394  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0812 12:15:12.427945  485208 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0812 12:15:12.434870  485208 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0812 12:15:12.448851  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:15:12.475774  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:15:12.501005  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:15:12.527382  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:15:12.553839  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0812 12:15:12.580109  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:15:12.607717  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:15:12.637578  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:15:12.665723  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:15:12.692019  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:15:12.718643  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:15:12.744580  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0812 12:15:12.763993  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0812 12:15:12.782714  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0812 12:15:12.801248  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0812 12:15:12.820166  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0812 12:15:12.840028  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0812 12:15:12.859427  485208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0812 12:15:12.880244  485208 ssh_runner.go:195] Run: openssl version
	I0812 12:15:12.886878  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:15:12.899584  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.904818  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.904897  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:15:12.911261  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:15:12.926317  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:15:12.938933  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.943838  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.943920  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:15:12.951418  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:15:12.963787  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:15:12.975929  485208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.980630  485208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.980709  485208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:15:12.986747  485208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:15:12.999324  485208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:15:13.003797  485208 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 12:15:13.003869  485208 kubeadm.go:934] updating node {m03 192.168.39.186 8443 v1.30.3 crio true true} ...
	I0812 12:15:13.003968  485208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:15:13.003997  485208 kube-vip.go:115] generating kube-vip config ...
	I0812 12:15:13.004040  485208 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:15:13.023415  485208 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:15:13.023501  485208 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0812 12:15:13.023589  485208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:15:13.035641  485208 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0812 12:15:13.035737  485208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0812 12:15:13.046627  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0812 12:15:13.046655  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:15:13.046671  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0812 12:15:13.046729  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:15:13.046779  485208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0812 12:15:13.046732  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0812 12:15:13.046814  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:15:13.046962  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0812 12:15:13.068415  485208 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:15:13.068480  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0812 12:15:13.068516  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0812 12:15:13.068547  485208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0812 12:15:13.068548  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0812 12:15:13.068576  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0812 12:15:13.107318  485208 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0812 12:15:13.107372  485208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0812 12:15:14.114893  485208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0812 12:15:14.124892  485208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0812 12:15:14.142699  485208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:15:14.161029  485208 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:15:14.178890  485208 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:15:14.183190  485208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 12:15:14.196092  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:14.328714  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:15:14.355760  485208 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:15:14.356204  485208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:15:14.356262  485208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:15:14.375497  485208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0812 12:15:14.375963  485208 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:15:14.376483  485208 main.go:141] libmachine: Using API Version  1
	I0812 12:15:14.376509  485208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:15:14.376927  485208 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:15:14.377194  485208 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:15:14.377395  485208 start.go:317] joinCluster: &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:15:14.377584  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0812 12:15:14.377630  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:15:14.380463  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:14.380977  485208 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:15:14.381012  485208 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:15:14.381206  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:15:14.381389  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:15:14.381565  485208 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:15:14.381745  485208 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:15:14.542735  485208 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:15:14.542788  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mqn6sp.73kz8b8xaiyk1wfd --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I0812 12:15:38.133587  485208 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mqn6sp.73kz8b8xaiyk1wfd --discovery-token-ca-cert-hash sha256:4a4990dadfd9153c5d0742ac7a1882f5396a5ab8b82ccfa8c6411cf1ab517f0f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220134-m03 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (23.590763313s)
	I0812 12:15:38.133628  485208 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0812 12:15:38.739268  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220134-m03 minikube.k8s.io/updated_at=2024_08_12T12_15_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5 minikube.k8s.io/name=ha-220134 minikube.k8s.io/primary=false
	I0812 12:15:38.870523  485208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220134-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0812 12:15:38.983247  485208 start.go:319] duration metric: took 24.605848322s to joinCluster
	I0812 12:15:38.983347  485208 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 12:15:38.983739  485208 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:15:38.984770  485208 out.go:177] * Verifying Kubernetes components...
	I0812 12:15:38.986098  485208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:15:39.253258  485208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:15:39.316089  485208 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:15:39.316442  485208 kapi.go:59] client config for ha-220134: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0812 12:15:39.316559  485208 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.228:8443
	I0812 12:15:39.316857  485208 node_ready.go:35] waiting up to 6m0s for node "ha-220134-m03" to be "Ready" ...
	I0812 12:15:39.316960  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:39.316974  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:39.316986  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:39.316995  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:39.342249  485208 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0812 12:15:39.817647  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:39.817675  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:39.817689  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:39.817692  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:39.821204  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:40.317902  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:40.317935  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:40.317947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:40.317953  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:40.338792  485208 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0812 12:15:40.817809  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:40.817837  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:40.817850  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:40.817855  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:40.824992  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:41.317426  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:41.317450  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:41.317459  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:41.317463  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:41.320763  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:41.321604  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:41.817471  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:41.817500  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:41.817512  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:41.817516  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:41.821210  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:42.317913  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:42.317936  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:42.317943  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:42.317947  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:42.324448  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:15:42.817303  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:42.817330  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:42.817340  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:42.817345  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:42.821361  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:43.317869  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:43.317901  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:43.317912  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:43.317920  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:43.321819  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:43.322582  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:43.817281  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:43.817305  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:43.817313  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:43.817317  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:43.821128  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:44.317960  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:44.317993  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:44.318005  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:44.318010  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:44.321622  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:44.817248  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:44.817274  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:44.817285  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:44.817291  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:44.821427  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:45.317125  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:45.317154  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:45.317164  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:45.317172  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:45.334629  485208 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0812 12:15:45.335223  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:45.817282  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:45.817310  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:45.817321  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:45.817326  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:45.821430  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:46.317976  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:46.318010  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:46.318024  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:46.318029  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:46.321889  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:46.817894  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:46.817923  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:46.817940  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:46.817946  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:46.821782  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.317153  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:47.317179  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:47.317191  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:47.317196  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:47.320769  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.817771  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:47.817797  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:47.817805  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:47.817809  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:47.821828  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:47.822695  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:48.318009  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:48.318035  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:48.318045  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:48.318057  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:48.321642  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:48.817283  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:48.817307  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:48.817318  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:48.817323  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:48.821163  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:49.317247  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:49.317273  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:49.317282  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:49.317287  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:49.320924  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:49.817610  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:49.817636  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:49.817646  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:49.817652  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:49.821736  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:49.822866  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:50.317833  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:50.317866  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:50.317878  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:50.317951  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:50.322169  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:50.817186  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:50.817212  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:50.817221  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:50.817225  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:50.821455  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:51.317855  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:51.317884  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:51.317894  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:51.317900  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:51.321625  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:51.818108  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:51.818148  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:51.818163  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:51.818171  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:51.822034  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:52.317152  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:52.317178  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:52.317187  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:52.317192  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:52.321179  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:52.321831  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:52.817189  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:52.817214  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:52.817223  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:52.817226  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:52.821188  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:53.317803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:53.317824  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:53.317833  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:53.317836  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:53.322305  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:53.818064  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:53.818088  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:53.818097  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:53.818101  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:53.825556  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:54.317918  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:54.317949  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:54.317963  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:54.317968  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:54.321283  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:54.322098  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:54.817992  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:54.818017  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:54.818025  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:54.818030  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:54.822041  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:55.317942  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:55.317965  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:55.317974  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:55.317979  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:55.321883  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:55.817294  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:55.817321  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:55.817332  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:55.817339  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:55.821366  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:56.318102  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:56.318126  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:56.318135  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:56.318139  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:56.321908  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:56.322456  485208 node_ready.go:53] node "ha-220134-m03" has status "Ready":"False"
	I0812 12:15:56.817371  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:56.817395  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:56.817404  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:56.817408  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:56.821250  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.317301  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:57.317332  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.317341  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.317345  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.320868  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.817803  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:57.817830  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.817842  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.817848  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.828061  485208 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0812 12:15:57.828760  485208 node_ready.go:49] node "ha-220134-m03" has status "Ready":"True"
	I0812 12:15:57.828796  485208 node_ready.go:38] duration metric: took 18.511915198s for node "ha-220134-m03" to be "Ready" ...
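The repeated GETs above are the node-readiness gate: the test polls /api/v1/nodes/ha-220134-m03 until its Ready condition reports True. A minimal client-go sketch of that loop is below; the kubeconfig path, node name, 6m budget, and ~500ms interval are taken from this log, everything else (program structure, helper names) is assumed and is not minikube's own implementation.

```go
// Sketch: poll a node's Ready condition against the same kubeconfig.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name as they appear in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19411-463103/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-220134-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println(`node "ha-220134-m03" is Ready`)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between GETs
	}
	fmt.Println("timed out waiting for node to be Ready")
}
```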
	I0812 12:15:57.828809  485208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 12:15:57.828904  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:15:57.828912  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.828922  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.828931  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.835389  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:15:57.844344  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.844483  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mtqtk
	I0812 12:15:57.844496  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.844507  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.844521  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.848032  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.848780  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.848802  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.848813  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.848820  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.856504  485208 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0812 12:15:57.857209  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.857235  485208 pod_ready.go:81] duration metric: took 12.849573ms for pod "coredns-7db6d8ff4d-mtqtk" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.857247  485208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.857333  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-t8pg7
	I0812 12:15:57.857344  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.857354  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.857363  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.860657  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.861454  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.861474  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.861485  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.861490  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.864480  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.865233  485208 pod_ready.go:92] pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.865259  485208 pod_ready.go:81] duration metric: took 8.001039ms for pod "coredns-7db6d8ff4d-t8pg7" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.865273  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.865347  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134
	I0812 12:15:57.865359  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.865369  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.865373  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.869318  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:57.870039  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:57.870059  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.870070  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.870077  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.872913  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.874165  485208 pod_ready.go:92] pod "etcd-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.874184  485208 pod_ready.go:81] duration metric: took 8.905178ms for pod "etcd-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.874193  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.874248  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m02
	I0812 12:15:57.874255  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.874262  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.874270  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.877018  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.877677  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:57.877697  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:57.877708  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:57.877713  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:57.880246  485208 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0812 12:15:57.880801  485208 pod_ready.go:92] pod "etcd-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:57.880823  485208 pod_ready.go:81] duration metric: took 6.623619ms for pod "etcd-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:57.880832  485208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.018082  485208 request.go:629] Waited for 137.1761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m03
	I0812 12:15:58.018175  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220134-m03
	I0812 12:15:58.018183  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.018191  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.018195  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.021679  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.218453  485208 request.go:629] Waited for 196.153729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:58.218517  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:58.218523  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.218534  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.218538  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.222280  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.222815  485208 pod_ready.go:92] pod "etcd-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:58.222838  485208 pod_ready.go:81] duration metric: took 341.999438ms for pod "etcd-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.222863  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.418603  485208 request.go:629] Waited for 195.632879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:15:58.418696  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134
	I0812 12:15:58.418706  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.418718  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.418727  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.422992  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:58.618130  485208 request.go:629] Waited for 194.402829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:58.618210  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:58.618218  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.618233  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.618251  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.622051  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:58.622714  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:58.622749  485208 pod_ready.go:81] duration metric: took 399.874745ms for pod "kube-apiserver-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.622763  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:58.818786  485208 request.go:629] Waited for 195.861954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:15:58.818855  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m02
	I0812 12:15:58.818864  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:58.818879  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:58.818888  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:58.822493  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.018527  485208 request.go:629] Waited for 195.364582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:59.018607  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:15:59.018612  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.018620  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.018624  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.022380  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.022926  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.022948  485208 pod_ready.go:81] duration metric: took 400.173977ms for pod "kube-apiserver-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.022959  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.218119  485208 request.go:629] Waited for 195.069484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m03
	I0812 12:15:59.218229  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220134-m03
	I0812 12:15:59.218249  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.218258  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.218262  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.222067  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.418056  485208 request.go:629] Waited for 195.153683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:59.418123  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:15:59.418128  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.418136  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.418142  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.421814  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.422444  485208 pod_ready.go:92] pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.422462  485208 pod_ready.go:81] duration metric: took 399.4962ms for pod "kube-apiserver-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.422473  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.618585  485208 request.go:629] Waited for 196.031623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:15:59.618684  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134
	I0812 12:15:59.618691  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.618703  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.618710  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.622949  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:15:59.818114  485208 request.go:629] Waited for 194.409087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:59.818181  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:15:59.818192  485208 round_trippers.go:469] Request Headers:
	I0812 12:15:59.818201  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:15:59.818204  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:15:59.821893  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:15:59.822434  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:15:59.822459  485208 pod_ready.go:81] duration metric: took 399.976836ms for pod "kube-controller-manager-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:15:59.822479  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.017940  485208 request.go:629] Waited for 195.346209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:16:00.018029  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m02
	I0812 12:16:00.018038  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.018046  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.018053  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.022105  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:00.218183  485208 request.go:629] Waited for 195.418276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:00.218257  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:00.218263  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.218270  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.218274  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.225132  485208 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0812 12:16:00.225623  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:00.225646  485208 pod_ready.go:81] duration metric: took 403.159407ms for pod "kube-controller-manager-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.225657  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.418740  485208 request.go:629] Waited for 193.005776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m03
	I0812 12:16:00.418835  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220134-m03
	I0812 12:16:00.418843  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.418854  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.418862  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.424349  485208 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0812 12:16:00.618588  485208 request.go:629] Waited for 193.405723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:00.618677  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:00.618685  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.618696  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.618702  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.622672  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:00.623302  485208 pod_ready.go:92] pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:00.623337  485208 pod_ready.go:81] duration metric: took 397.673607ms for pod "kube-controller-manager-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.623348  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:00.818404  485208 request.go:629] Waited for 194.974777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:16:00.818491  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bs72f
	I0812 12:16:00.818497  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:00.818505  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:00.818511  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:00.822075  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.018322  485208 request.go:629] Waited for 195.38453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:01.018456  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:01.018468  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.018478  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.018486  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.023391  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:01.024001  485208 pod_ready.go:92] pod "kube-proxy-bs72f" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.024028  485208 pod_ready.go:81] duration metric: took 400.674392ms for pod "kube-proxy-bs72f" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.024039  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-frf96" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.218585  485208 request.go:629] Waited for 194.46965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frf96
	I0812 12:16:01.218658  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-frf96
	I0812 12:16:01.218664  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.218674  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.218682  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.222376  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.418194  485208 request.go:629] Waited for 193.424592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:01.418267  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:01.418272  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.418281  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.418285  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.422466  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:01.423030  485208 pod_ready.go:92] pod "kube-proxy-frf96" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.423056  485208 pod_ready.go:81] duration metric: took 399.011331ms for pod "kube-proxy-frf96" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.423074  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.618143  485208 request.go:629] Waited for 194.985445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:16:01.618222  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcgh8
	I0812 12:16:01.618228  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.618239  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.618243  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.622216  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.818656  485208 request.go:629] Waited for 195.548171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:01.818725  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:01.818731  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:01.818738  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:01.818741  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:01.822308  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:01.823189  485208 pod_ready.go:92] pod "kube-proxy-zcgh8" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:01.823218  485208 pod_ready.go:81] duration metric: took 400.132968ms for pod "kube-proxy-zcgh8" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:01.823234  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.018381  485208 request.go:629] Waited for 195.050301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:16:02.018474  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134
	I0812 12:16:02.018482  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.018503  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.018527  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.022296  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:02.218293  485208 request.go:629] Waited for 195.40703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:02.218390  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134
	I0812 12:16:02.218395  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.218406  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.218419  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.222345  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:02.223079  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:02.223106  485208 pod_ready.go:81] duration metric: took 399.864213ms for pod "kube-scheduler-ha-220134" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.223120  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.417959  485208 request.go:629] Waited for 194.725233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:16:02.418040  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m02
	I0812 12:16:02.418047  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.418058  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.418067  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.422438  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:02.618516  485208 request.go:629] Waited for 195.439125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:02.618611  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m02
	I0812 12:16:02.618619  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.618629  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.618636  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.622890  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:02.623630  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:02.623657  485208 pod_ready.go:81] duration metric: took 400.529786ms for pod "kube-scheduler-ha-220134-m02" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.623667  485208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:02.818619  485208 request.go:629] Waited for 194.850023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m03
	I0812 12:16:02.818691  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220134-m03
	I0812 12:16:02.818697  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:02.818707  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:02.818721  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:02.822164  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.018730  485208 request.go:629] Waited for 195.397233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:03.018813  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes/ha-220134-m03
	I0812 12:16:03.018822  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.018835  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.018852  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.022262  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.022736  485208 pod_ready.go:92] pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace has status "Ready":"True"
	I0812 12:16:03.022755  485208 pod_ready.go:81] duration metric: took 399.081346ms for pod "kube-scheduler-ha-220134-m03" in "kube-system" namespace to be "Ready" ...
	I0812 12:16:03.022766  485208 pod_ready.go:38] duration metric: took 5.193943384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
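Each per-pod wait above reduces to reading the PodReady condition off the pod's status for every system-critical pod matched by the listed label selectors. A hedged sketch of that check is below; isPodReady is a hypothetical helper for illustration, not code from minikube.

```go
package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady mirrors the Ready check applied to each system-critical pod
// in the log above: true only if the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```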
	I0812 12:16:03.022782  485208 api_server.go:52] waiting for apiserver process to appear ...
	I0812 12:16:03.022838  485208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:16:03.039206  485208 api_server.go:72] duration metric: took 24.055813006s to wait for apiserver process to appear ...
	I0812 12:16:03.039235  485208 api_server.go:88] waiting for apiserver healthz status ...
	I0812 12:16:03.039255  485208 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0812 12:16:03.044029  485208 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0812 12:16:03.044126  485208 round_trippers.go:463] GET https://192.168.39.228:8443/version
	I0812 12:16:03.044138  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.044149  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.044158  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.045192  485208 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0812 12:16:03.045275  485208 api_server.go:141] control plane version: v1.30.3
	I0812 12:16:03.045294  485208 api_server.go:131] duration metric: took 6.052725ms to wait for apiserver health ...
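After the process check, the log probes the API server twice: GET /healthz (expecting the literal body "ok") and GET /version (which reports v1.30.3 in this run). A small sketch of those two probes with client-go follows; checkAPIServer is a hypothetical helper name, and the sketch assumes a clientset built from the same kubeconfig as above.

```go
package healthcheck

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer reproduces the two probes in the log above:
// GET /healthz followed by GET /version.
func checkAPIServer(cs *kubernetes.Clientset) error {
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", raw) // the log shows "returned 200: ok"

	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", info.GitVersion) // v1.30.3 in this run
	return nil
}
```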
	I0812 12:16:03.045304  485208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 12:16:03.217860  485208 request.go:629] Waited for 172.441694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.217949  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.217960  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.217983  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.218012  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.226864  485208 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0812 12:16:03.234278  485208 system_pods.go:59] 24 kube-system pods found
	I0812 12:16:03.234312  485208 system_pods.go:61] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:16:03.234319  485208 system_pods.go:61] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:16:03.234324  485208 system_pods.go:61] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:16:03.234330  485208 system_pods.go:61] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:16:03.234334  485208 system_pods.go:61] "etcd-ha-220134-m03" [7e4b8706-73e3-42d0-a278-af5746ec8b1c] Running
	I0812 12:16:03.234338  485208 system_pods.go:61] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:16:03.234343  485208 system_pods.go:61] "kindnet-5rpgt" [31982666-9f03-4c8c-9af1-49b88de06452] Running
	I0812 12:16:03.234348  485208 system_pods.go:61] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:16:03.234352  485208 system_pods.go:61] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:16:03.234358  485208 system_pods.go:61] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:16:03.234362  485208 system_pods.go:61] "kube-apiserver-ha-220134-m03" [803dd422-e106-4e57-b70b-cef6cfb2f085] Running
	I0812 12:16:03.234367  485208 system_pods.go:61] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:16:03.234376  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:16:03.234382  485208 system_pods.go:61] "kube-controller-manager-ha-220134-m03" [20cc5801-d513-46d3-84c1-635ef86e0cc6] Running
	I0812 12:16:03.234390  485208 system_pods.go:61] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:16:03.234396  485208 system_pods.go:61] "kube-proxy-frf96" [e7a33b21-d4a2-4099-8b0c-e602993fd716] Running
	I0812 12:16:03.234402  485208 system_pods.go:61] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:16:03.234408  485208 system_pods.go:61] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:16:03.234413  485208 system_pods.go:61] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:16:03.234421  485208 system_pods.go:61] "kube-scheduler-ha-220134-m03" [eb11cfca-d302-4c98-8d7c-ba0689b8f812] Running
	I0812 12:16:03.234427  485208 system_pods.go:61] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:16:03.234433  485208 system_pods.go:61] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:16:03.234439  485208 system_pods.go:61] "kube-vip-ha-220134-m03" [d4064203-c571-43ac-a0f4-8cb1082d3e05] Running
	I0812 12:16:03.234448  485208 system_pods.go:61] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:16:03.234458  485208 system_pods.go:74] duration metric: took 189.142008ms to wait for pod list to return data ...
	I0812 12:16:03.234471  485208 default_sa.go:34] waiting for default service account to be created ...
	I0812 12:16:03.418556  485208 request.go:629] Waited for 183.983595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:16:03.418632  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/default/serviceaccounts
	I0812 12:16:03.418637  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.418645  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.418651  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.423472  485208 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0812 12:16:03.423614  485208 default_sa.go:45] found service account: "default"
	I0812 12:16:03.423636  485208 default_sa.go:55] duration metric: took 189.156291ms for default service account to be created ...
	I0812 12:16:03.423648  485208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 12:16:03.618148  485208 request.go:629] Waited for 194.414281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.618238  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/namespaces/kube-system/pods
	I0812 12:16:03.618243  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.618251  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.618256  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.627772  485208 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0812 12:16:03.635180  485208 system_pods.go:86] 24 kube-system pods found
	I0812 12:16:03.635218  485208 system_pods.go:89] "coredns-7db6d8ff4d-mtqtk" [be769ca5-c3cd-4682-96f3-6244b5e1cadb] Running
	I0812 12:16:03.635225  485208 system_pods.go:89] "coredns-7db6d8ff4d-t8pg7" [219c5cf3-19e1-40fc-98c8-9c2d2a800b7b] Running
	I0812 12:16:03.635229  485208 system_pods.go:89] "etcd-ha-220134" [c5f18146-c2e2-4fff-9c0d-596ae90fa52c] Running
	I0812 12:16:03.635233  485208 system_pods.go:89] "etcd-ha-220134-m02" [c47fb727-a9e8-4fc0-b214-4c207e3b6ca5] Running
	I0812 12:16:03.635237  485208 system_pods.go:89] "etcd-ha-220134-m03" [7e4b8706-73e3-42d0-a278-af5746ec8b1c] Running
	I0812 12:16:03.635241  485208 system_pods.go:89] "kindnet-52flt" [33960bd4-6e69-4d0e-85c4-e360440e20ca] Running
	I0812 12:16:03.635244  485208 system_pods.go:89] "kindnet-5rpgt" [31982666-9f03-4c8c-9af1-49b88de06452] Running
	I0812 12:16:03.635248  485208 system_pods.go:89] "kindnet-mh4sv" [cd619441-cf92-4026-98ef-0f50d4bfc470] Running
	I0812 12:16:03.635252  485208 system_pods.go:89] "kube-apiserver-ha-220134" [4a4c795c-537c-4c8f-97e9-dbe5aa5cf833] Running
	I0812 12:16:03.635256  485208 system_pods.go:89] "kube-apiserver-ha-220134-m02" [bbb2ea59-2be6-4169-9cb1-30a0156576f3] Running
	I0812 12:16:03.635260  485208 system_pods.go:89] "kube-apiserver-ha-220134-m03" [803dd422-e106-4e57-b70b-cef6cfb2f085] Running
	I0812 12:16:03.635263  485208 system_pods.go:89] "kube-controller-manager-ha-220134" [2b2cf67b-146b-4b3e-a9d4-9f9db19a1e1a] Running
	I0812 12:16:03.635268  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m02" [3e1ffbcc-5420-4fec-ae1b-b847b9abbbe3] Running
	I0812 12:16:03.635272  485208 system_pods.go:89] "kube-controller-manager-ha-220134-m03" [20cc5801-d513-46d3-84c1-635ef86e0cc6] Running
	I0812 12:16:03.635276  485208 system_pods.go:89] "kube-proxy-bs72f" [5327fab0-4436-4ddd-8114-66f4f1f66628] Running
	I0812 12:16:03.635279  485208 system_pods.go:89] "kube-proxy-frf96" [e7a33b21-d4a2-4099-8b0c-e602993fd716] Running
	I0812 12:16:03.635283  485208 system_pods.go:89] "kube-proxy-zcgh8" [a39c5f53-1764-43b6-a140-2fec3819210d] Running
	I0812 12:16:03.635286  485208 system_pods.go:89] "kube-scheduler-ha-220134" [0dfbb024-200a-4206-96b7-cf0479104cea] Running
	I0812 12:16:03.635290  485208 system_pods.go:89] "kube-scheduler-ha-220134-m02" [49eb61bd-caf9-4248-a2b5-9520d397faa8] Running
	I0812 12:16:03.635293  485208 system_pods.go:89] "kube-scheduler-ha-220134-m03" [eb11cfca-d302-4c98-8d7c-ba0689b8f812] Running
	I0812 12:16:03.635296  485208 system_pods.go:89] "kube-vip-ha-220134" [393b98a5-fa45-458d-9d14-b74f09c9384a] Running
	I0812 12:16:03.635300  485208 system_pods.go:89] "kube-vip-ha-220134-m02" [6e3d6563-cf8f-4b00-9595-aa0900b9b978] Running
	I0812 12:16:03.635303  485208 system_pods.go:89] "kube-vip-ha-220134-m03" [d4064203-c571-43ac-a0f4-8cb1082d3e05] Running
	I0812 12:16:03.635306  485208 system_pods.go:89] "storage-provisioner" [bca65bc5-3ba1-44be-8606-f8235cf9b3d0] Running
	I0812 12:16:03.635314  485208 system_pods.go:126] duration metric: took 211.659957ms to wait for k8s-apps to be running ...
	I0812 12:16:03.635325  485208 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 12:16:03.635375  485208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:16:03.652340  485208 system_svc.go:56] duration metric: took 17.002405ms WaitForService to wait for kubelet
	I0812 12:16:03.652383  485208 kubeadm.go:582] duration metric: took 24.668994669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:16:03.652411  485208 node_conditions.go:102] verifying NodePressure condition ...
	I0812 12:16:03.817784  485208 request.go:629] Waited for 165.280343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.228:8443/api/v1/nodes
	I0812 12:16:03.817924  485208 round_trippers.go:463] GET https://192.168.39.228:8443/api/v1/nodes
	I0812 12:16:03.817938  485208 round_trippers.go:469] Request Headers:
	I0812 12:16:03.817947  485208 round_trippers.go:473]     Accept: application/json, */*
	I0812 12:16:03.817951  485208 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0812 12:16:03.821918  485208 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0812 12:16:03.823150  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823177  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823191  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823196  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823201  485208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 12:16:03.823206  485208 node_conditions.go:123] node cpu capacity is 2
	I0812 12:16:03.823211  485208 node_conditions.go:105] duration metric: took 170.794009ms to run NodePressure ...
	I0812 12:16:03.823232  485208 start.go:241] waiting for startup goroutines ...
	I0812 12:16:03.823263  485208 start.go:255] writing updated cluster config ...
	I0812 12:16:03.823610  485208 ssh_runner.go:195] Run: rm -f paused
	I0812 12:16:03.881159  485208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0812 12:16:03.884497  485208 out.go:177] * Done! kubectl is now configured to use "ha-220134" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.472552643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465243472522855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a80fe156-c982-4adf-b74d-eec28f6d6de8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.473418185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56a94867-8132-4f8a-a8a5-b96dea892890 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.473501099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56a94867-8132-4f8a-a8a5-b96dea892890 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.474132890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56a94867-8132-4f8a-a8a5-b96dea892890 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.518395769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5a20f55-5369-46b9-865b-1f867faaf65f name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.518652528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5a20f55-5369-46b9-865b-1f867faaf65f name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.520582728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acf603ef-3637-41d5-ad10-97091144c623 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.521236390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465243521204398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acf603ef-3637-41d5-ad10-97091144c623 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.522551887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f615930-f3c7-44a3-b060-a54a9789c841 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.522666041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f615930-f3c7-44a3-b060-a54a9789c841 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.523164870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f615930-f3c7-44a3-b060-a54a9789c841 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.572691163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dcb1112-0454-47c4-a8de-3b6a4a1ac875 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.572822740Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dcb1112-0454-47c4-a8de-3b6a4a1ac875 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.574497873Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6bc40c1-7d1c-441b-a84f-d68199a85c78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.574985614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465243574958961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6bc40c1-7d1c-441b-a84f-d68199a85c78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.575865326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2943c12a-0a97-4d8e-9b91-30dca939ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.575939104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2943c12a-0a97-4d8e-9b91-30dca939ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.576202519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2943c12a-0a97-4d8e-9b91-30dca939ddc1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.617742299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86701303-e836-41a9-b6d0-e0195aa72718 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.617858066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86701303-e836-41a9-b6d0-e0195aa72718 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.620351701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=180c91e4-a91e-4b58-a26b-4ccde3a1cf95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.621492043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465243621456930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=180c91e4-a91e-4b58-a26b-4ccde3a1cf95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.622132253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36acc107-229a-42e2-9169-9d588877d8ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.622183795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36acc107-229a-42e2-9169-9d588877d8ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:20:43 ha-220134 crio[680]: time="2024-08-12 12:20:43.622485460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723464968121017676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763046838090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d772d606436a45273d942f376c75da2c6561d370230e9783a2e6aee5f53b8b95,PodSandboxId:3a4517d1fb24cfc897bb15e75951a75c7babcd6ca6644a73b224d9d81a847a5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723464763004064198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723464763003601414,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3c
d-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723464750926088287,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723464746
717591487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8,PodSandboxId:38b5e173b2a5b69d5b12b949ecd5adc180d91fec8c3b4778301fe76a19eaba74,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172346472937
6185962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a2fb7f75425c6aec875451722b8037,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723464726177802197,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4,PodSandboxId:dfe26ae1cd45795f75a1ac6c6797aba7f89213005cadc7ecafea4fee233c205f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723464726180616816,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9,PodSandboxId:142675cc5defdac9f674024ab3c1ff44719cef0372133c1681721883d052fa3c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723464726147863342,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723464726065544985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36acc107-229a-42e2-9169-9d588877d8ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd5e5f2f3e8c9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   d0ae8920356aa       busybox-fc5497c4f-qh8vv
	58c1b0454a4f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   2c5c191b44764       coredns-7db6d8ff4d-t8pg7
	d772d606436a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   3a4517d1fb24c       storage-provisioner
	d6bc464a808be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   c1f343a193477       coredns-7db6d8ff4d-mtqtk
	ec1c98b0147f2       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago       Running             kindnet-cni               0                   6bb5cf25bace5       kindnet-mh4sv
	43dd48710573d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   d3f2e966dc4ec       kube-proxy-zcgh8
	4c2431108a96b       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   38b5e173b2a5b       kube-vip-ha-220134
	61f57a70138eb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   dfe26ae1cd457       kube-apiserver-ha-220134
	3b386f478bcd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   e773728876a09       etcd-ha-220134
	d80fece0b2b4c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   142675cc5defd       kube-controller-manager-ha-220134
	e302617a6e799       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   36c1552f9acff       kube-scheduler-ha-220134
	
	
	==> coredns [58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf] <==
	[INFO] 10.244.2.2:51341 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00065331s
	[INFO] 10.244.2.2:60084 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012982474s
	[INFO] 10.244.1.2:47114 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001833451s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000675959s
	[INFO] 10.244.0.4:53598 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121522s
	[INFO] 10.244.0.4:43198 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083609s
	[INFO] 10.244.2.2:44558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149393s
	[INFO] 10.244.2.2:54267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000289357s
	[INFO] 10.244.2.2:36401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192313s
	[INFO] 10.244.2.2:47805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012737375s
	[INFO] 10.244.2.2:52660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213917s
	[INFO] 10.244.2.2:56721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019118s
	[INFO] 10.244.1.2:46713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180271s
	[INFO] 10.244.1.2:45630 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117989s
	[INFO] 10.244.1.2:36911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001707s
	[INFO] 10.244.2.2:55073 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.2.2:37969 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010618s
	[INFO] 10.244.1.2:57685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225366s
	[INFO] 10.244.1.2:52755 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103176s
	[INFO] 10.244.0.4:52936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131913s
	[INFO] 10.244.0.4:57415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055098s
	[INFO] 10.244.2.2:48523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363461s
	[INFO] 10.244.1.2:41861 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150101s
	[INFO] 10.244.0.4:60137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147895s
	[INFO] 10.244.0.4:46681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070169s
	
	
	==> coredns [d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691] <==
	[INFO] 10.244.1.2:59335 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001407399s
	[INFO] 10.244.1.2:36634 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210279s
	[INFO] 10.244.1.2:55843 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126918s
	[INFO] 10.244.0.4:55735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122773s
	[INFO] 10.244.0.4:45449 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001732135s
	[INFO] 10.244.0.4:52443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019087s
	[INFO] 10.244.0.4:57191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001115s
	[INFO] 10.244.0.4:36774 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249129s
	[INFO] 10.244.0.4:36176 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018293s
	[INFO] 10.244.0.4:52138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073249s
	[INFO] 10.244.0.4:52765 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054999s
	[INFO] 10.244.2.2:35368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110859s
	[INFO] 10.244.2.2:55727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119256s
	[INFO] 10.244.1.2:45598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120462s
	[INFO] 10.244.1.2:57257 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000297797s
	[INFO] 10.244.0.4:48236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152091s
	[INFO] 10.244.0.4:40466 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098727s
	[INFO] 10.244.2.2:37067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001712s
	[INFO] 10.244.2.2:54242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014178s
	[INFO] 10.244.2.2:41816 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019482s
	[INFO] 10.244.1.2:42291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335455s
	[INFO] 10.244.1.2:33492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078001s
	[INFO] 10.244.1.2:52208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005886s
	[INFO] 10.244.0.4:55618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005463s
	[INFO] 10.244.0.4:59573 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079101s
	
	
	==> describe nodes <==
	Name:               ha-220134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:12:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:16:18 +0000   Mon, 12 Aug 2024 12:12:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-220134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b36c448dca9a4512802dabd6b631307b
	  System UUID:                b36c448d-ca9a-4512-802d-abd6b631307b
	  Boot ID:                    b1858840-6bc1-4ad6-872f-13825f26f2e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qh8vv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7db6d8ff4d-mtqtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 coredns-7db6d8ff4d-t8pg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 etcd-ha-220134                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m32s
	  kube-system                 kindnet-mh4sv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m19s
	  kube-system                 kube-apiserver-ha-220134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-ha-220134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-zcgh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-ha-220134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-vip-ha-220134                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m16s  kube-proxy       
	  Normal  Starting                 8m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m31s  kubelet          Node ha-220134 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s  kubelet          Node ha-220134 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s  kubelet          Node ha-220134 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m19s  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal  NodeReady                8m1s   kubelet          Node ha-220134 status is now: NodeReady
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal  RegisteredNode           4m50s  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	
	
	Name:               ha-220134-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:14:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:17:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 12:16:22 +0000   Mon, 12 Aug 2024 12:18:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-220134-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ab5f23e5e3d4308ad21378e16e05f36
	  System UUID:                5ab5f23e-5e3d-4308-ad21-378e16e05f36
	  Boot ID:                    8780b076-0f04-484a-8659-00b31b1b3882
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9hhl4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-220134-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-52flt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-220134-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-220134-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-bs72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-220134-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-220134-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s (x7 over 6m25s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  NodeNotReady             2m41s                  node-controller  Node ha-220134-m02 status is now: NodeNotReady
	
	
	Name:               ha-220134-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_15_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:20:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:16:36 +0000   Mon, 12 Aug 2024 12:15:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-220134-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ec5658a50d452880d7dcb7c738e134
	  System UUID:                d4ec5658-a50d-4528-80d7-dcb7c738e134
	  Boot ID:                    0c28ba62-fd1f-4822-8fc9-5eb9067b87cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-82gr9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-220134-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-5rpgt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-apiserver-ha-220134-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-220134-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-frf96                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-ha-220134-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-220134-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-220134-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                 node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal  RegisteredNode           4m51s                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	
	
	Name:               ha-220134-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:16:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:20:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:16:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:17:14 +0000   Mon, 12 Aug 2024 12:17:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-220134-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 faa5c8215a114c109397b8051f5bfb12
	  System UUID:                faa5c821-5a11-4c10-9397-b8051f5bfb12
	  Boot ID:                    c4c180b8-7edc-46ca-84c9-9555186bc2c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zcp4c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-s6pvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x2 over 4m1s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x2 over 4m1s)  kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x2 over 4m1s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal  NodeReady                3m39s                kubelet          Node ha-220134-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug12 12:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051001] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039995] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.777370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.613356] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.630122] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.287271] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057678] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.199862] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.121638] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281974] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.332937] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060566] kauditd_printk_skb: 130 callbacks suppressed
	[Aug12 12:12] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.913038] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.066004] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.086767] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.012156] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.877478] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 12:14] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99] <==
	{"level":"warn","ts":"2024-08-12T12:20:43.936191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.936972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.95501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.963959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.972133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.977608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.981494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.989641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:43.999078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.008841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.013981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.017782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.033761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.036643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.037347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.058451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.078489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.083619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.090586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.097569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.113656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.124106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.132112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.136381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:20:44.189191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:20:44 up 9 min,  0 users,  load average: 0.20, 0.25, 0.15
	Linux ha-220134 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa] <==
	I0812 12:20:12.005522       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:20:22.006803       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:20:22.006859       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:20:22.007012       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:20:22.007038       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:20:22.007103       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:20:22.007126       1 main.go:299] handling current node
	I0812 12:20:22.007137       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:20:22.007141       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:20:31.998703       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:20:31.998846       1 main.go:299] handling current node
	I0812 12:20:31.998905       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:20:31.998914       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:20:31.999243       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:20:31.999369       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:20:31.999479       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:20:31.999509       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:20:42.000444       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:20:42.000573       1 main.go:299] handling current node
	I0812 12:20:42.000603       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:20:42.000621       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:20:42.000822       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:20:42.000921       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:20:42.001112       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:20:42.001412       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4] <==
	I0812 12:12:11.213208       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0812 12:12:11.221113       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.228]
	I0812 12:12:11.230397       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:12:11.251937       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 12:12:11.294950       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 12:12:12.357598       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 12:12:12.384090       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0812 12:12:12.414815       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 12:12:24.757597       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0812 12:12:25.551999       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0812 12:16:09.943027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49948: use of closed network connection
	E0812 12:16:10.130561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49968: use of closed network connection
	E0812 12:16:10.342928       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49984: use of closed network connection
	E0812 12:16:10.573186       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49998: use of closed network connection
	E0812 12:16:10.768615       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50068: use of closed network connection
	E0812 12:16:10.974785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50094: use of closed network connection
	E0812 12:16:11.172734       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50098: use of closed network connection
	E0812 12:16:11.365638       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50118: use of closed network connection
	E0812 12:16:11.558080       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50136: use of closed network connection
	E0812 12:16:12.084482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50166: use of closed network connection
	E0812 12:16:12.266684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50178: use of closed network connection
	E0812 12:16:12.464018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50198: use of closed network connection
	E0812 12:16:12.654171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50212: use of closed network connection
	E0812 12:16:12.865763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50236: use of closed network connection
	W0812 12:17:41.239580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.228]
	
	
	==> kube-controller-manager [d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9] <==
	I0812 12:15:35.294189       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220134-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:15:39.667529       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m03"
	I0812 12:16:04.827718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.929208ms"
	I0812 12:16:04.876769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.984734ms"
	I0812 12:16:05.009630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.714804ms"
	I0812 12:16:05.244812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="234.340169ms"
	E0812 12:16:05.245013       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:16:05.245232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.766µs"
	I0812 12:16:05.257906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.748µs"
	I0812 12:16:05.278251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.901µs"
	I0812 12:16:05.359391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.964618ms"
	I0812 12:16:05.359493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.029µs"
	I0812 12:16:08.230464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.076642ms"
	I0812 12:16:08.230557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.729µs"
	I0812 12:16:09.315384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.540789ms"
	I0812 12:16:09.315719       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.661µs"
	I0812 12:16:09.457388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.119094ms"
	I0812 12:16:09.457533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.667µs"
	I0812 12:16:43.900847       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-220134-m04\" does not exist"
	I0812 12:16:43.937726       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220134-m04" podCIDRs=["10.244.3.0/24"]
	I0812 12:16:44.679091       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m04"
	I0812 12:17:05.169644       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220134-m04"
	I0812 12:18:03.476805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220134-m04"
	I0812 12:18:03.523864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.151557ms"
	I0812 12:18:03.524084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.67µs"
	
	
	==> kube-proxy [43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e] <==
	I0812 12:12:26.916066       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:12:26.937082       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	I0812 12:12:26.985828       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:12:26.985927       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:12:26.986005       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:12:26.989628       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:12:26.989976       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:12:26.990035       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:12:26.992043       1 config.go:192] "Starting service config controller"
	I0812 12:12:26.992418       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:12:26.992483       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:12:26.992502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:12:26.993790       1 config.go:319] "Starting node config controller"
	I0812 12:12:26.993840       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:12:27.092914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:12:27.093103       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:12:27.094604       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768] <==
	W0812 12:12:10.559188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:12:10.559311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:12:10.568739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 12:12:10.568844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0812 12:12:10.576109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 12:12:10.576370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:12:10.597752       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 12:12:10.597845       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 12:12:10.625590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:12:10.625685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:12:10.663130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:12:10.663175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0812 12:12:13.072576       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 12:16:43.986514       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-zcp4c\": pod kindnet-zcp4c is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-zcp4c" node="ha-220134-m04"
	E0812 12:16:43.988978       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 829781ac-6d1e-4b05-8980-64006094f191(kube-system/kindnet-zcp4c) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-zcp4c"
	E0812 12:16:43.989401       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-zcp4c\": pod kindnet-zcp4c is already assigned to node \"ha-220134-m04\"" pod="kube-system/kindnet-zcp4c"
	I0812 12:16:43.989622       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-zcp4c" node="ha-220134-m04"
	E0812 12:16:44.006124       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-s6pvf\": pod kube-proxy-s6pvf is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-s6pvf" node="ha-220134-m04"
	E0812 12:16:44.006923       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 107f24c1-a9a0-4eb3-99ce-a767ff974ea6(kube-system/kube-proxy-s6pvf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-s6pvf"
	E0812 12:16:44.007014       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-s6pvf\": pod kube-proxy-s6pvf is already assigned to node \"ha-220134-m04\"" pod="kube-system/kube-proxy-s6pvf"
	I0812 12:16:44.007090       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-s6pvf" node="ha-220134-m04"
	E0812 12:16:44.022580       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-txxjp\": pod kube-proxy-txxjp is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-txxjp" node="ha-220134-m04"
	E0812 12:16:44.022798       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 27ac376e-f61e-4abe-9d7d-1201161d7d1f(kube-system/kube-proxy-txxjp) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-txxjp"
	E0812 12:16:44.022882       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-txxjp\": pod kube-proxy-txxjp is already assigned to node \"ha-220134-m04\"" pod="kube-system/kube-proxy-txxjp"
	I0812 12:16:44.022990       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-txxjp" node="ha-220134-m04"
	
	
	==> kubelet <==
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:16:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:16:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:16:12 ha-220134 kubelet[1373]: E0812 12:16:12.865172    1373 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:57490->127.0.0.1:44369: read tcp 127.0.0.1:57490->127.0.0.1:44369: read: connection reset by peer
	Aug 12 12:17:12 ha-220134 kubelet[1373]: E0812 12:17:12.310771    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:17:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:17:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:18:12 ha-220134 kubelet[1373]: E0812 12:18:12.305804    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:18:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:18:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:19:12 ha-220134 kubelet[1373]: E0812 12:19:12.309461    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:19:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:19:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:20:12 ha-220134 kubelet[1373]: E0812 12:20:12.312118    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:20:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:20:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:20:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:20:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220134 -n ha-220134
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-220134 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-220134 -v=7 --alsologtostderr
E0812 12:21:12.301070  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-220134 -v=7 --alsologtostderr: exit status 82 (2m1.887334336s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220134-m04"  ...
	* Stopping node "ha-220134-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:20:45.684805  491706 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:20:45.685130  491706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:45.685144  491706 out.go:304] Setting ErrFile to fd 2...
	I0812 12:20:45.685149  491706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:20:45.685326  491706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:20:45.685581  491706 out.go:298] Setting JSON to false
	I0812 12:20:45.685677  491706 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:45.686066  491706 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:45.686153  491706 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:20:45.686331  491706 mustload.go:65] Loading cluster: ha-220134
	I0812 12:20:45.686462  491706 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:20:45.686487  491706 stop.go:39] StopHost: ha-220134-m04
	I0812 12:20:45.686871  491706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:45.686938  491706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:45.702723  491706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0812 12:20:45.703225  491706 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:45.703940  491706 main.go:141] libmachine: Using API Version  1
	I0812 12:20:45.703976  491706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:45.704401  491706 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:45.707345  491706 out.go:177] * Stopping node "ha-220134-m04"  ...
	I0812 12:20:45.708871  491706 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 12:20:45.708915  491706 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:20:45.709252  491706 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 12:20:45.709292  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:20:45.711953  491706 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:45.712420  491706 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:16:28 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:20:45.712458  491706 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:20:45.712551  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:20:45.712736  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:20:45.712933  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:20:45.713128  491706 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:20:45.800531  491706 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 12:20:45.854986  491706 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 12:20:45.909683  491706 main.go:141] libmachine: Stopping "ha-220134-m04"...
	I0812 12:20:45.909728  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:45.911470  491706 main.go:141] libmachine: (ha-220134-m04) Calling .Stop
	I0812 12:20:45.915420  491706 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 0/120
	I0812 12:20:47.080315  491706 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:20:47.081923  491706 main.go:141] libmachine: Machine "ha-220134-m04" was stopped.
	I0812 12:20:47.081945  491706 stop.go:75] duration metric: took 1.373080092s to stop
	I0812 12:20:47.081971  491706 stop.go:39] StopHost: ha-220134-m03
	I0812 12:20:47.082381  491706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:20:47.082442  491706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:20:47.098261  491706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0812 12:20:47.098786  491706 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:20:47.099366  491706 main.go:141] libmachine: Using API Version  1
	I0812 12:20:47.099388  491706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:20:47.099789  491706 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:20:47.102168  491706 out.go:177] * Stopping node "ha-220134-m03"  ...
	I0812 12:20:47.103682  491706 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 12:20:47.103726  491706 main.go:141] libmachine: (ha-220134-m03) Calling .DriverName
	I0812 12:20:47.104044  491706 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 12:20:47.104076  491706 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHHostname
	I0812 12:20:47.107572  491706 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:47.108123  491706 main.go:141] libmachine: (ha-220134-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:00:32", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:15:01 +0000 UTC Type:0 Mac:52:54:00:dc:00:32 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-220134-m03 Clientid:01:52:54:00:dc:00:32}
	I0812 12:20:47.108170  491706 main.go:141] libmachine: (ha-220134-m03) DBG | domain ha-220134-m03 has defined IP address 192.168.39.186 and MAC address 52:54:00:dc:00:32 in network mk-ha-220134
	I0812 12:20:47.108373  491706 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHPort
	I0812 12:20:47.108575  491706 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHKeyPath
	I0812 12:20:47.108736  491706 main.go:141] libmachine: (ha-220134-m03) Calling .GetSSHUsername
	I0812 12:20:47.108882  491706 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m03/id_rsa Username:docker}
	I0812 12:20:47.197762  491706 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 12:20:47.252446  491706 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 12:20:47.311223  491706 main.go:141] libmachine: Stopping "ha-220134-m03"...
	I0812 12:20:47.311258  491706 main.go:141] libmachine: (ha-220134-m03) Calling .GetState
	I0812 12:20:47.313078  491706 main.go:141] libmachine: (ha-220134-m03) Calling .Stop
	I0812 12:20:47.317205  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 0/120
	I0812 12:20:48.318750  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 1/120
	I0812 12:20:49.319996  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 2/120
	I0812 12:20:50.321580  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 3/120
	I0812 12:20:51.323280  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 4/120
	I0812 12:20:52.326340  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 5/120
	I0812 12:20:53.328753  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 6/120
	I0812 12:20:54.330336  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 7/120
	I0812 12:20:55.331679  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 8/120
	I0812 12:20:56.333384  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 9/120
	I0812 12:20:57.335819  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 10/120
	I0812 12:20:58.337272  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 11/120
	I0812 12:20:59.338857  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 12/120
	I0812 12:21:00.341376  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 13/120
	I0812 12:21:01.343090  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 14/120
	I0812 12:21:02.345611  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 15/120
	I0812 12:21:03.347026  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 16/120
	I0812 12:21:04.348533  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 17/120
	I0812 12:21:05.350404  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 18/120
	I0812 12:21:06.352197  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 19/120
	I0812 12:21:07.354481  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 20/120
	I0812 12:21:08.356114  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 21/120
	I0812 12:21:09.357721  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 22/120
	I0812 12:21:10.359321  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 23/120
	I0812 12:21:11.360896  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 24/120
	I0812 12:21:12.362997  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 25/120
	I0812 12:21:13.364672  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 26/120
	I0812 12:21:14.366311  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 27/120
	I0812 12:21:15.367772  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 28/120
	I0812 12:21:16.369503  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 29/120
	I0812 12:21:17.371310  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 30/120
	I0812 12:21:18.373128  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 31/120
	I0812 12:21:19.374528  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 32/120
	I0812 12:21:20.376126  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 33/120
	I0812 12:21:21.377627  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 34/120
	I0812 12:21:22.379263  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 35/120
	I0812 12:21:23.381416  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 36/120
	I0812 12:21:24.382718  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 37/120
	I0812 12:21:25.383985  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 38/120
	I0812 12:21:26.385472  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 39/120
	I0812 12:21:27.387223  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 40/120
	I0812 12:21:28.388756  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 41/120
	I0812 12:21:29.390091  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 42/120
	I0812 12:21:30.391599  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 43/120
	I0812 12:21:31.393142  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 44/120
	I0812 12:21:32.395021  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 45/120
	I0812 12:21:33.396738  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 46/120
	I0812 12:21:34.398362  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 47/120
	I0812 12:21:35.400140  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 48/120
	I0812 12:21:36.401599  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 49/120
	I0812 12:21:37.403400  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 50/120
	I0812 12:21:38.404778  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 51/120
	I0812 12:21:39.406214  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 52/120
	I0812 12:21:40.407880  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 53/120
	I0812 12:21:41.409291  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 54/120
	I0812 12:21:42.411331  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 55/120
	I0812 12:21:43.412651  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 56/120
	I0812 12:21:44.414412  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 57/120
	I0812 12:21:45.415912  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 58/120
	I0812 12:21:46.417267  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 59/120
	I0812 12:21:47.418786  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 60/120
	I0812 12:21:48.420252  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 61/120
	I0812 12:21:49.421700  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 62/120
	I0812 12:21:50.423098  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 63/120
	I0812 12:21:51.424467  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 64/120
	I0812 12:21:52.426545  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 65/120
	I0812 12:21:53.428554  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 66/120
	I0812 12:21:54.430046  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 67/120
	I0812 12:21:55.432068  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 68/120
	I0812 12:21:56.434483  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 69/120
	I0812 12:21:57.436117  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 70/120
	I0812 12:21:58.437652  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 71/120
	I0812 12:21:59.439062  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 72/120
	I0812 12:22:00.440487  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 73/120
	I0812 12:22:01.442121  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 74/120
	I0812 12:22:02.444182  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 75/120
	I0812 12:22:03.445854  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 76/120
	I0812 12:22:04.447427  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 77/120
	I0812 12:22:05.449114  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 78/120
	I0812 12:22:06.450688  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 79/120
	I0812 12:22:07.452540  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 80/120
	I0812 12:22:08.453847  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 81/120
	I0812 12:22:09.455449  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 82/120
	I0812 12:22:10.456870  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 83/120
	I0812 12:22:11.458434  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 84/120
	I0812 12:22:12.460418  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 85/120
	I0812 12:22:13.462003  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 86/120
	I0812 12:22:14.463533  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 87/120
	I0812 12:22:15.465253  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 88/120
	I0812 12:22:16.466633  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 89/120
	I0812 12:22:17.468617  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 90/120
	I0812 12:22:18.470289  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 91/120
	I0812 12:22:19.471766  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 92/120
	I0812 12:22:20.473371  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 93/120
	I0812 12:22:21.474938  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 94/120
	I0812 12:22:22.476516  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 95/120
	I0812 12:22:23.478006  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 96/120
	I0812 12:22:24.479483  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 97/120
	I0812 12:22:25.480955  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 98/120
	I0812 12:22:26.482347  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 99/120
	I0812 12:22:27.484330  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 100/120
	I0812 12:22:28.485885  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 101/120
	I0812 12:22:29.487556  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 102/120
	I0812 12:22:30.488862  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 103/120
	I0812 12:22:31.490419  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 104/120
	I0812 12:22:32.492630  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 105/120
	I0812 12:22:33.494031  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 106/120
	I0812 12:22:34.495687  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 107/120
	I0812 12:22:35.497197  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 108/120
	I0812 12:22:36.498739  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 109/120
	I0812 12:22:37.500803  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 110/120
	I0812 12:22:38.502201  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 111/120
	I0812 12:22:39.503947  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 112/120
	I0812 12:22:40.505602  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 113/120
	I0812 12:22:41.507978  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 114/120
	I0812 12:22:42.510130  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 115/120
	I0812 12:22:43.511568  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 116/120
	I0812 12:22:44.513161  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 117/120
	I0812 12:22:45.514786  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 118/120
	I0812 12:22:46.516248  491706 main.go:141] libmachine: (ha-220134-m03) Waiting for machine to stop 119/120
	I0812 12:22:47.517474  491706 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 12:22:47.517562  491706 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 12:22:47.519760  491706 out.go:177] 
	W0812 12:22:47.521239  491706 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 12:22:47.521255  491706 out.go:239] * 
	* 
	W0812 12:22:47.525059  491706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 12:22:47.526571  491706 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-220134 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220134 --wait=true -v=7 --alsologtostderr
E0812 12:25:44.616051  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-220134 --wait=true -v=7 --alsologtostderr: (3m48.840862699s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-220134
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220134 -n ha-220134
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220134 logs -n 25: (2.05241524s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m04 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp testdata/cp-test.txt                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m04_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03:/home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m03 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-220134 node stop m02 -v=7                                                    | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-220134 node start m02 -v=7                                                   | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-220134 -v=7                                                          | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-220134 -v=7                                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-220134 --wait=true -v=7                                                   | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:22 UTC | 12 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-220134                                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:26 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:22:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:22:47.577579  492160 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:22:47.577697  492160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:22:47.577706  492160 out.go:304] Setting ErrFile to fd 2...
	I0812 12:22:47.577711  492160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:22:47.577881  492160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:22:47.578421  492160 out.go:298] Setting JSON to false
	I0812 12:22:47.579508  492160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14699,"bootTime":1723450669,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:22:47.579578  492160 start.go:139] virtualization: kvm guest
	I0812 12:22:47.581885  492160 out.go:177] * [ha-220134] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:22:47.583618  492160 notify.go:220] Checking for updates...
	I0812 12:22:47.583635  492160 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:22:47.585293  492160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:22:47.586843  492160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:22:47.588201  492160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:22:47.589464  492160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:22:47.590868  492160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:22:47.592783  492160 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:22:47.592927  492160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:22:47.593417  492160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:22:47.593478  492160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:22:47.609584  492160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0812 12:22:47.610160  492160 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:22:47.610807  492160 main.go:141] libmachine: Using API Version  1
	I0812 12:22:47.610834  492160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:22:47.611182  492160 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:22:47.611359  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.650048  492160 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 12:22:47.651406  492160 start.go:297] selected driver: kvm2
	I0812 12:22:47.651425  492160 start.go:901] validating driver "kvm2" against &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:22:47.651648  492160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:22:47.652099  492160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:22:47.652194  492160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:22:47.668087  492160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:22:47.668835  492160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:22:47.668925  492160 cni.go:84] Creating CNI manager for ""
	I0812 12:22:47.668940  492160 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 12:22:47.669026  492160 start.go:340] cluster config:
	{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:22:47.669262  492160 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:22:47.671146  492160 out.go:177] * Starting "ha-220134" primary control-plane node in "ha-220134" cluster
	I0812 12:22:47.672616  492160 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:22:47.672663  492160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:22:47.672685  492160 cache.go:56] Caching tarball of preloaded images
	I0812 12:22:47.672802  492160 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:22:47.672817  492160 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:22:47.673008  492160 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:22:47.673269  492160 start.go:360] acquireMachinesLock for ha-220134: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:22:47.673318  492160 start.go:364] duration metric: took 27.094µs to acquireMachinesLock for "ha-220134"
	I0812 12:22:47.673338  492160 start.go:96] Skipping create...Using existing machine configuration
	I0812 12:22:47.673349  492160 fix.go:54] fixHost starting: 
	I0812 12:22:47.673656  492160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:22:47.673694  492160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:22:47.688733  492160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0812 12:22:47.689225  492160 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:22:47.689686  492160 main.go:141] libmachine: Using API Version  1
	I0812 12:22:47.689705  492160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:22:47.690066  492160 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:22:47.690281  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.690492  492160 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:22:47.692011  492160 fix.go:112] recreateIfNeeded on ha-220134: state=Running err=<nil>
	W0812 12:22:47.692047  492160 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 12:22:47.694084  492160 out.go:177] * Updating the running kvm2 "ha-220134" VM ...
	I0812 12:22:47.695575  492160 machine.go:94] provisionDockerMachine start ...
	I0812 12:22:47.695603  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.695891  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.698639  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.699128  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.699159  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.699303  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.699526  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.699722  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.699862  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.700029  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.700264  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.700282  492160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 12:22:47.806711  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:22:47.806746  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:47.807039  492160 buildroot.go:166] provisioning hostname "ha-220134"
	I0812 12:22:47.807072  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:47.807291  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.810186  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.810609  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.810642  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.810822  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.811033  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.811201  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.811358  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.811523  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.811724  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.811739  492160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134 && echo "ha-220134" | sudo tee /etc/hostname
	I0812 12:22:47.929823  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:22:47.929864  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.932830  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.933395  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.933426  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.933597  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.933809  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.933961  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.934075  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.934240  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.934447  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.934468  492160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:22:48.038517  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
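The hostname provisioning above is just those two SSH commands (write /etc/hostname, then patch the 127.0.1.1 entry in /etc/hosts). A quick way to confirm the result from the Jenkins host, reusing the key and address that appear in this log (the check itself is illustrative and not part of the captured output):

    # confirm the guest now reports the provisioned hostname and hosts entry
    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 'hostname && grep 127.0.1.1 /etc/hosts'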
	I0812 12:22:48.038549  492160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:22:48.038597  492160 buildroot.go:174] setting up certificates
	I0812 12:22:48.038609  492160 provision.go:84] configureAuth start
	I0812 12:22:48.038621  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:48.038921  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:22:48.041886  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.042253  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.042276  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.042500  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.044897  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.045352  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.045392  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.045519  492160 provision.go:143] copyHostCerts
	I0812 12:22:48.045554  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:22:48.045615  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:22:48.045627  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:22:48.045710  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:22:48.045834  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:22:48.045863  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:22:48.045872  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:22:48.045914  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:22:48.045990  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:22:48.046015  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:22:48.046032  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:22:48.046069  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:22:48.046154  492160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134 san=[127.0.0.1 192.168.39.228 ha-220134 localhost minikube]
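minikube generates the server certificate above in-process; purely as a sketch of what that step amounts to, an openssl equivalent using the same CA material and SAN list would look like this (outputs go to /tmp so nothing in the profile is touched):

    cd /home/jenkins/minikube-integration/19411-463103/.minikube
    # key + CSR with the org/CN used by the provisioner above
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-220134/CN=minikube" \
        -keyout /tmp/server-key.pem -out /tmp/server.csr
    # sign with the cluster CA and attach the SANs from the log line above
    openssl x509 -req -in /tmp/server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
        -days 365 -out /tmp/server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.228,DNS:ha-220134,DNS:localhost,DNS:minikube')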
	I0812 12:22:48.409906  492160 provision.go:177] copyRemoteCerts
	I0812 12:22:48.410000  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:22:48.410035  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.413269  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.413768  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.413806  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.413972  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:48.414212  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.414401  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:48.414536  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:22:48.497161  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:22:48.497243  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:22:48.525612  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:22:48.525767  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:22:48.553052  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:22:48.553154  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0812 12:22:48.578935  492160 provision.go:87] duration metric: took 540.309638ms to configureAuth
	I0812 12:22:48.578971  492160 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:22:48.579236  492160 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:22:48.579334  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.582106  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.582595  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.582631  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.582756  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:48.582969  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.583143  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.583306  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:48.583475  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:48.583690  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:48.583713  492160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:24:19.441316  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:24:19.441357  492160 machine.go:97] duration metric: took 1m31.745762394s to provisionDockerMachine
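Nearly all of that 1m31.7s sits between the SSH command issued at 12:22:48.583 and its result at 12:24:19.441, i.e. the single command that writes /etc/sysconfig/crio.minikube and runs `systemctl restart crio`. If the restart looks slow, the guest-side timing can be pulled like this (illustrative, not part of the test):

    # when did crio last become active, and what did it log on the way up?
    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 \
        'sudo systemctl show crio -p ActiveEnterTimestamp && sudo journalctl -u crio -b --no-pager | tail -n 20'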
	I0812 12:24:19.441374  492160 start.go:293] postStartSetup for "ha-220134" (driver="kvm2")
	I0812 12:24:19.441395  492160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:24:19.441422  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.441852  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:24:19.441890  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.445403  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.445945  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.445969  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.446128  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.446374  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.446571  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.446734  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.528994  492160 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:24:19.533473  492160 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:24:19.533504  492160 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:24:19.533583  492160 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:24:19.533686  492160 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:24:19.533700  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:24:19.533830  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:24:19.544000  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:24:19.568869  492160 start.go:296] duration metric: took 127.477266ms for postStartSetup
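postStartSetup had only one local asset to push, the 1708-byte CA bundle 4703752.pem; a spot-check that it landed (again illustrative):

    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 'ls -l /etc/ssl/certs/4703752.pem'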
	I0812 12:24:19.568922  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.569260  492160 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0812 12:24:19.569293  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.572177  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.572646  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.572676  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.572837  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.573032  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.573244  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.573409  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	W0812 12:24:19.651288  492160 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0812 12:24:19.651315  492160 fix.go:56] duration metric: took 1m31.977968081s for fixHost
	I0812 12:24:19.651339  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.654426  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.654827  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.654853  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.654990  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.655193  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.655335  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.655446  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.655648  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:24:19.655868  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:24:19.655880  492160 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:24:19.758356  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723465459.723555354
	
	I0812 12:24:19.758386  492160 fix.go:216] guest clock: 1723465459.723555354
	I0812 12:24:19.758396  492160 fix.go:229] Guest: 2024-08-12 12:24:19.723555354 +0000 UTC Remote: 2024-08-12 12:24:19.651322372 +0000 UTC m=+92.113335850 (delta=72.232982ms)
	I0812 12:24:19.758427  492160 fix.go:200] guest clock delta is within tolerance: 72.232982ms
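The clock check compares the guest's `date +%s.%N` against the host wall clock and accepts the 72ms delta. Re-measuring the skew by hand looks roughly like this (bc on the host is an assumption; the figure includes SSH round-trip latency, so it will read slightly high):

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 'date +%s.%N')
    echo "guest-host skew: $(echo "$guest_now - $host_now" | bc) s"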
	I0812 12:24:19.758445  492160 start.go:83] releasing machines lock for "ha-220134", held for 1m32.085108085s
	I0812 12:24:19.758478  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.758780  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:24:19.761524  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.761939  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.761967  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.762132  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762675  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762904  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762993  492160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:24:19.763037  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.763164  492160 ssh_runner.go:195] Run: cat /version.json
	I0812 12:24:19.763193  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.765751  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766007  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766153  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.766181  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766349  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.766440  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.766467  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766539  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.766659  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.766753  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.766843  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.766894  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.766961  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.767132  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.861425  492160 ssh_runner.go:195] Run: systemctl --version
	I0812 12:24:19.867724  492160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:24:20.027344  492160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:24:20.035531  492160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:24:20.035621  492160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:24:20.044904  492160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 12:24:20.044932  492160 start.go:495] detecting cgroup driver to use...
	I0812 12:24:20.044998  492160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:24:20.060670  492160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:24:20.074880  492160 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:24:20.074956  492160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:24:20.088592  492160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:24:20.103020  492160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:24:20.255655  492160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:24:20.400241  492160 docker.go:233] disabling docker service ...
	I0812 12:24:20.400332  492160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:24:20.416652  492160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:24:20.430546  492160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:24:20.577347  492160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:24:20.724552  492160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:24:20.738895  492160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:24:20.760004  492160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:24:20.760090  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.771013  492160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:24:20.771107  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.783866  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.795411  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.806539  492160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:24:20.819040  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.830381  492160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.844202  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.855431  492160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:24:20.865796  492160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:24:20.876375  492160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:24:21.041154  492160 ssh_runner.go:195] Run: sudo systemctl restart crio
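The sed commands above only touch the /etc/crio/crio.conf.d/02-crio.conf drop-in; after the restart, the effective values can be read back with `crio config` (the same command the log runs a moment later), for example:

    # confirm the pause image, cgroup driver and sysctl edits made it into the effective config
    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 \
        'sudo crio config 2>/dev/null | grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start"'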
	I0812 12:24:21.380071  492160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:24:21.380159  492160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:24:21.385847  492160 start.go:563] Will wait 60s for crictl version
	I0812 12:24:21.385922  492160 ssh_runner.go:195] Run: which crictl
	I0812 12:24:21.389853  492160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:24:21.427848  492160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:24:21.427949  492160 ssh_runner.go:195] Run: crio --version
	I0812 12:24:21.457881  492160 ssh_runner.go:195] Run: crio --version
	I0812 12:24:21.488479  492160 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:24:21.489996  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:24:21.492937  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:21.493354  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:21.493381  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:21.493629  492160 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:24:21.498630  492160 kubeadm.go:883] updating cluster {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:24:21.498784  492160 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:24:21.498836  492160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:24:21.544751  492160 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:24:21.544779  492160 crio.go:433] Images already preloaded, skipping extraction
	I0812 12:24:21.544835  492160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:24:21.578970  492160 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:24:21.579001  492160 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:24:21.579012  492160 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.30.3 crio true true} ...
	I0812 12:24:21.579136  492160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
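The kubelet flags above end up on the node as the 309-byte systemd drop-in scp'd below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Done by hand it would look roughly like this:

    # write the drop-in shown above (content as captured in the log; [Install] left empty as logged)
    printf '%s\n' \
        '[Unit]' \
        'Wants=crio.service' \
        '' \
        '[Service]' \
        'ExecStart=' \
        'ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228' \
        '' \
        '[Install]' \
        | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    # pick up the new unit and start kubelet, as the log does at 12:24:21
    sudo systemctl daemon-reload && sudo systemctl start kubelet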
	I0812 12:24:21.579212  492160 ssh_runner.go:195] Run: crio config
	I0812 12:24:21.632266  492160 cni.go:84] Creating CNI manager for ""
	I0812 12:24:21.632297  492160 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 12:24:21.632317  492160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:24:21.632355  492160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220134 NodeName:ha-220134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:24:21.632499  492160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220134"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
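The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp below) before minikube decides whether the control plane needs reconfiguring. To see what actually changed against the previously applied copy (the old path here is an assumption, not from the log):

    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 \
        'sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true'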
	
	I0812 12:24:21.632522  492160 kube-vip.go:115] generating kube-vip config ...
	I0812 12:24:21.632583  492160 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:24:21.645137  492160 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:24:21.645258  492160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
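Once a control-plane node wins the plndr-cp-lock lease, kube-vip should bind the HA VIP from the manifest above (192.168.39.254) to eth0 and load-balance port 8443. Illustrative checks, run inside the elected node; the /healthz probe may need credentials depending on cluster policy:

    # the VIP should be present on the interface named in vip_interface above
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # the load-balanced apiserver endpoint should answer on the configured port
    curl -sk https://192.168.39.254:8443/healthz; echo
    # kube-vip's own metrics, per "prometheus_server: :2112" above
    curl -s http://127.0.0.1:2112/metrics | head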
	I0812 12:24:21.645324  492160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:24:21.654802  492160 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:24:21.654874  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 12:24:21.664328  492160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0812 12:24:21.680594  492160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:24:21.696758  492160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0812 12:24:21.713188  492160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:24:21.730954  492160 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:24:21.735833  492160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:24:21.883350  492160 ssh_runner.go:195] Run: sudo systemctl start kubelet
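With the drop-in, unit file, kubeadm.yaml.new and kube-vip.yaml in place and kubelet started, a quick health check (illustrative) is:

    ssh -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa \
        docker@192.168.39.228 \
        'sudo systemctl is-active kubelet && sudo crictl ps --name kube-vip'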
	I0812 12:24:21.898343  492160 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.228
	I0812 12:24:21.898374  492160 certs.go:194] generating shared ca certs ...
	I0812 12:24:21.898395  492160 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.898591  492160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:24:21.898651  492160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:24:21.898664  492160 certs.go:256] generating profile certs ...
	I0812 12:24:21.898766  492160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:24:21.898801  492160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa
	I0812 12:24:21.898832  492160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.186 192.168.39.254]
	I0812 12:24:21.968565  492160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa ...
	I0812 12:24:21.968600  492160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa: {Name:mk7f492d864eb7efe6c3a76c18877669259706b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.968808  492160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa ...
	I0812 12:24:21.968828  492160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa: {Name:mk977dc6aa6dfea27e78b42a178ab60052c7c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.968925  492160 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:24:21.969131  492160 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:24:21.969325  492160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:24:21.969346  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:24:21.969393  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:24:21.969413  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:24:21.969431  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:24:21.969444  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:24:21.969469  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:24:21.969484  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:24:21.969501  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:24:21.969570  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:24:21.969611  492160 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:24:21.969625  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:24:21.969655  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:24:21.969686  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:24:21.969715  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:24:21.969774  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:24:21.969822  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:24:21.969843  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:21.969866  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:24:21.970471  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:24:21.996863  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:24:22.022866  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:24:22.049041  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:24:22.080685  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 12:24:22.105639  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:24:22.129654  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:24:22.153960  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:24:22.177868  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:24:22.202086  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:24:22.227429  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:24:22.252771  492160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
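	(Editor's note: the NewFileAsset and scp lines above stage each local cert next to its in-VM target and then copy it across. A minimal sketch of that loop is below; minikube's ssh_runner actually streams the files over an established SSH session, while this sketch shells out to the scp binary for brevity, and the user, key path, host IP and file paths are illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// asset pairs a local source file with its destination on the node.
	type asset struct{ src, dst string }

	func main() {
		assets := []asset{
			{"/home/user/.minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
			{"/home/user/.minikube/profiles/p/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
			{"/home/user/.minikube/ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
		}
		for _, a := range assets {
			cmd := exec.Command("scp",
				"-i", "/home/user/.minikube/machines/node/id_rsa",
				a.src, fmt.Sprintf("docker@%s:%s", "192.168.39.228", a.dst))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("copy %s failed: %v\n%s", a.src, err, out)
			}
		}
	}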
	I0812 12:24:22.269851  492160 ssh_runner.go:195] Run: openssl version
	I0812 12:24:22.275726  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:24:22.286202  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.290576  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.290649  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.296152  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:24:22.305633  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:24:22.316559  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.321344  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.321413  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.327667  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:24:22.338408  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:24:22.349998  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.354775  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.354848  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.360709  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
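	(Editor's note: the openssl/ln sequence above installs each CA into the node's trust store: the PEM is linked under /etc/ssl/certs, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created so TLS libraries can resolve the issuer. A condensed sketch of that idea, run on the target host, is below; it collapses the two-step linking into one symlink and the paths are illustrative.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}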
	I0812 12:24:22.370584  492160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:24:22.375445  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 12:24:22.381375  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 12:24:22.387243  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 12:24:22.392999  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 12:24:22.398995  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 12:24:22.404530  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
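	(Editor's note: the -checkend probes above are how the remaining control-plane certs are validated: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours, which is what would trigger regeneration. A small sketch of interpreting that exit code; the cert paths are just the ones listed above.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresSoon reports whether openssl says the cert expires within 24h.
	// Note a missing or unreadable file also yields a non-nil error here.
	func expiresSoon(path string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		for _, c := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			fmt.Println(c, "expires within 24h:", expiresSoon(c))
		}
	}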
	I0812 12:24:22.410194  492160 kubeadm.go:392] StartCluster: {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
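	(Editor's note: the StartCluster dump above is minikube's cluster configuration printed verbatim. A heavily trimmed Go illustration of its shape follows, to make the dump easier to read; the real config struct carries many more fields and these type names are only approximations.)

	package main

	import "fmt"

	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		APIServerHAVIP    string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	type ClusterConfig struct {
		Name             string
		Memory           int
		CPUs             int
		Driver           string
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}

	func main() {
		cc := ClusterConfig{
			Name: "ha-220134", Memory: 2200, CPUs: 2, Driver: "kvm2",
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.30.3", ClusterName: "ha-220134",
				APIServerHAVIP: "192.168.39.254", ContainerRuntime: "crio",
				ServiceCIDR: "10.96.0.0/12",
			},
			Nodes: []Node{
				{IP: "192.168.39.228", Port: 8443, KubernetesVersion: "v1.30.3", ControlPlane: true, Worker: true},
				{Name: "m04", IP: "192.168.39.39", KubernetesVersion: "v1.30.3", Worker: true},
			},
		}
		fmt.Printf("%+v\n", cc)
	}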
	I0812 12:24:22.410318  492160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:24:22.410378  492160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:24:22.446529  492160 cri.go:89] found id: "735980f5d449557f871b4e20c9f7dfdf9956d199a5945569a2f1c83aae8bdd3a"
	I0812 12:24:22.446561  492160 cri.go:89] found id: "dbbecf0729bcf7a0b2025d25cc36fb9931f9ffe13a952e3da5528edc643af2ac"
	I0812 12:24:22.446566  492160 cri.go:89] found id: "7081d52c1eb4ab20d6c5b56c16344435565fcebc6d1995e156a8b868152a1a2c"
	I0812 12:24:22.446569  492160 cri.go:89] found id: "58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf"
	I0812 12:24:22.446571  492160 cri.go:89] found id: "d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691"
	I0812 12:24:22.446575  492160 cri.go:89] found id: "ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa"
	I0812 12:24:22.446577  492160 cri.go:89] found id: "43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e"
	I0812 12:24:22.446580  492160 cri.go:89] found id: "4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8"
	I0812 12:24:22.446582  492160 cri.go:89] found id: "61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4"
	I0812 12:24:22.446588  492160 cri.go:89] found id: "3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99"
	I0812 12:24:22.446603  492160 cri.go:89] found id: "d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9"
	I0812 12:24:22.446606  492160 cri.go:89] found id: "e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768"
	I0812 12:24:22.446609  492160 cri.go:89] found id: ""
	I0812 12:24:22.446652  492160 ssh_runner.go:195] Run: sudo runc list -f json
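	(Editor's note: the container-listing step above runs crictl on the node with a namespace label filter and collects the returned container IDs, which are the "found id" lines that follow. A local equivalent of that query is sketched below; it assumes crictl is installed and root access on the node.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query as the ssh_runner line above: all containers (-a),
		// IDs only (--quiet), restricted to the kube-system namespace label.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}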
	
	
	==> CRI-O <==
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.147361358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465597147335852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6f3fd4b-86a9-4354-9b0d-865547ddb283 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.148199966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc1e5c2c-3bdb-491d-aae7-aa3344bf0140 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.148323209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc1e5c2c-3bdb-491d-aae7-aa3344bf0140 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.148693744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a
7f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.por
ts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2201
34,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440d
cd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b
766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa50
01a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c
04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce
67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc1e5c2c-3bdb-491d-aae7-aa3344bf0140 name=/runtime.v1.RuntimeService/ListContainers
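	(Editor's note: the Request/Response pairs CRI-O logs above are the CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued by the kubelet and crictl against the runtime socket. A minimal client that makes the same calls is sketched below; it assumes the standard k8s.io/cri-api bindings and the default CRI-O socket path.)

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ctx := context.Background()

		// Same exchange as the VersionRequest/VersionResponse lines above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// ListContainers with a kube-system label filter, like the crictl query earlier.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}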
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.202446470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59851cec-a2e0-45ae-a8ec-90c1bf9bb400 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.202575014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59851cec-a2e0-45ae-a8ec-90c1bf9bb400 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.203938588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7881ff68-116a-4010-b027-d90a7c7939ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.204589238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465597204563898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7881ff68-116a-4010-b027-d90a7c7939ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.205043543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7022848c-4b88-4bec-ad79-4e20554d6f22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.205111100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7022848c-4b88-4bec-ad79-4e20554d6f22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.205574837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a
7f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.por
ts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2201
34,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440d
cd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b
766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa50
01a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c
04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce
67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7022848c-4b88-4bec-ad79-4e20554d6f22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.253401739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=387568f0-d4e3-4311-90e2-cd03ad78d83e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.253502722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=387568f0-d4e3-4311-90e2-cd03ad78d83e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.255790679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e12402e9-04ac-45dd-8754-18d0bd15e2f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.256493630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465597256459139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e12402e9-04ac-45dd-8754-18d0bd15e2f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.256979566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2da4ca41-9846-43fb-a4d7-930fc770b39c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.257063312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2da4ca41-9846-43fb-a4d7-930fc770b39c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.257806198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a
7f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.por
ts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2201
34,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440d
cd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b
766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa50
01a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c
04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce
67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2da4ca41-9846-43fb-a4d7-930fc770b39c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.314253861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=48d0eb31-89bd-444f-865a-65f3b73a04ac name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.314432302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=48d0eb31-89bd-444f-865a-65f3b73a04ac name=/runtime.v1.RuntimeService/Version
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.315502767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71b48b38-ebc0-4db3-be1b-d22db4a9ef65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.316011037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465597315984430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71b48b38-ebc0-4db3-be1b-d22db4a9ef65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.316611488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=100849d1-3d99-4c0d-aa10-46c64a86d4f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.316669781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=100849d1-3d99-4c0d-aa10-46c64a86d4f9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:26:37 ha-220134 crio[3945]: time="2024-08-12 12:26:37.317150300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contai
nerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a
7f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.por
ts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-2201
34,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440d
cd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b
766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa50
01a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c
04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce
67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=100849d1-3d99-4c0d-aa10-46c64a86d4f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3f8928354b6b4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   6569ef6537c27       kube-apiserver-ha-220134
	ec85593ac4c79       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a4a6b470c70ab       busybox-fc5497c4f-qh8vv
	af4d1d3b9cb94       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   71fc1c2740167       kube-controller-manager-ha-220134
	c5dcc1f027fb3       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   d3e2acfe2b290       kube-vip-ha-220134
	9ab4af4cc0977       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   5646a54fc7ad2       storage-provisioner
	3db6a6687be27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   167da13cfce58       coredns-7db6d8ff4d-t8pg7
	c0e5686d9b943       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e04de298a51b8       coredns-7db6d8ff4d-mtqtk
	bf25bb773e691       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      2 minutes ago        Running             kindnet-cni               1                   3a275d5d9110e       kindnet-mh4sv
	aa48279fba2d2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   75fa020cbc0ea       kube-proxy-zcgh8
	14dbe639118d7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   71fc1c2740167       kube-controller-manager-ha-220134
	6509beb3fc8e0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   94d077c767449       etcd-ha-220134
	d63c5a66780a6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   f1664d03896ff       kube-scheduler-ha-220134
	f2a17dd84b58b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   6569ef6537c27       kube-apiserver-ha-220134
	fd5e5f2f3e8c9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   d0ae8920356aa       busybox-fc5497c4f-qh8vv
	58c1b0454a4f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   2c5c191b44764       coredns-7db6d8ff4d-t8pg7
	d6bc464a808be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   c1f343a193477       coredns-7db6d8ff4d-mtqtk
	ec1c98b0147f2       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    14 minutes ago       Exited              kindnet-cni               0                   6bb5cf25bace5       kindnet-mh4sv
	43dd48710573d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   d3f2e966dc4ec       kube-proxy-zcgh8
	3b386f478bcd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   e773728876a09       etcd-ha-220134
	e302617a6e799       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   36c1552f9acff       kube-scheduler-ha-220134
	
	
	==> coredns [3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9] <==
	[INFO] plugin/kubernetes: Trace[802006662]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 12:24:30.586) (total time: 10000ms):
	Trace[802006662]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (12:24:40.587)
	Trace[802006662]: [10.000973733s] [10.000973733s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf] <==
	[INFO] 10.244.0.4:43198 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083609s
	[INFO] 10.244.2.2:44558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149393s
	[INFO] 10.244.2.2:54267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000289357s
	[INFO] 10.244.2.2:36401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192313s
	[INFO] 10.244.2.2:47805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012737375s
	[INFO] 10.244.2.2:52660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213917s
	[INFO] 10.244.2.2:56721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019118s
	[INFO] 10.244.1.2:46713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180271s
	[INFO] 10.244.1.2:45630 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117989s
	[INFO] 10.244.1.2:36911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001707s
	[INFO] 10.244.2.2:55073 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.2.2:37969 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010618s
	[INFO] 10.244.1.2:57685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225366s
	[INFO] 10.244.1.2:52755 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103176s
	[INFO] 10.244.0.4:52936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131913s
	[INFO] 10.244.0.4:57415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055098s
	[INFO] 10.244.2.2:48523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363461s
	[INFO] 10.244.1.2:41861 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150101s
	[INFO] 10.244.0.4:60137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147895s
	[INFO] 10.244.0.4:46681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070169s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1185666084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 12:24:30.237) (total time: 10002ms):
	Trace[1185666084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:24:40.239)
	Trace[1185666084]: [10.002143427s] [10.002143427s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40864->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38224->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38224->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691] <==
	[INFO] 10.244.0.4:52443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019087s
	[INFO] 10.244.0.4:57191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001115s
	[INFO] 10.244.0.4:36774 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249129s
	[INFO] 10.244.0.4:36176 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018293s
	[INFO] 10.244.0.4:52138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073249s
	[INFO] 10.244.0.4:52765 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054999s
	[INFO] 10.244.2.2:35368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110859s
	[INFO] 10.244.2.2:55727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119256s
	[INFO] 10.244.1.2:45598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120462s
	[INFO] 10.244.1.2:57257 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000297797s
	[INFO] 10.244.0.4:48236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152091s
	[INFO] 10.244.0.4:40466 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098727s
	[INFO] 10.244.2.2:37067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001712s
	[INFO] 10.244.2.2:54242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014178s
	[INFO] 10.244.2.2:41816 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019482s
	[INFO] 10.244.1.2:42291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335455s
	[INFO] 10.244.1.2:33492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078001s
	[INFO] 10.244.1.2:52208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005886s
	[INFO] 10.244.0.4:55618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005463s
	[INFO] 10.244.0.4:59573 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079101s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-220134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:12:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:26:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:25:07 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:25:07 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:25:07 +0000   Mon, 12 Aug 2024 12:12:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:25:07 +0000   Mon, 12 Aug 2024 12:12:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-220134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b36c448dca9a4512802dabd6b631307b
	  System UUID:                b36c448d-ca9a-4512-802d-abd6b631307b
	  Boot ID:                    b1858840-6bc1-4ad6-872f-13825f26f2e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qh8vv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-mtqtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-t8pg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-220134                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-mh4sv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-220134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-220134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-zcgh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-220134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-220134                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 90s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-220134 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-220134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-220134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-220134 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Warning  ContainerGCFailed        2m25s (x2 over 3m25s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           79s                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           78s                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	
	
	Name:               ha-220134-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:14:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:26:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-220134-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ab5f23e5e3d4308ad21378e16e05f36
	  System UUID:                5ab5f23e-5e3d-4308-ad21-378e16e05f36
	  Boot ID:                    4b12cc87-77d4-4a02-89b2-18398058ad76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9hhl4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-220134-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-52flt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-220134-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-220134-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bs72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-220134-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-220134-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 86s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  NodeNotReady             8m34s                node-controller  Node ha-220134-m02 status is now: NodeNotReady
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           79s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           78s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           30s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	
	
	Name:               ha-220134-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_15_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:26:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:26:13 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:26:13 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:26:13 +0000   Mon, 12 Aug 2024 12:15:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:26:13 +0000   Mon, 12 Aug 2024 12:15:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-220134-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4ec5658a50d452880d7dcb7c738e134
	  System UUID:                d4ec5658-a50d-4528-80d7-dcb7c738e134
	  Boot ID:                    b6937c2b-c35f-4d6f-bc74-20f36a278584
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-82gr9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-220134-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-5rpgt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-220134-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-220134-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-frf96                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-220134-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-220134-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-220134-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal   RegisteredNode           79s                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal   RegisteredNode           78s                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-220134-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-220134-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-220134-m03 has been rebooted, boot id: b6937c2b-c35f-4d6f-bc74-20f36a278584
	  Normal   RegisteredNode           30s                node-controller  Node ha-220134-m03 event: Registered Node ha-220134-m03 in Controller
	
	
	Name:               ha-220134-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:16:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:26:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:26:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:26:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:26:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:26:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-220134-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 faa5c8215a114c109397b8051f5bfb12
	  System UUID:                faa5c821-5a11-4c10-9397-b8051f5bfb12
	  Boot ID:                    8f925e92-9813-4393-b910-64fefb6efe12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zcp4c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m54s
	  kube-system                 kube-proxy-s6pvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m54s (x2 over 9m54s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m54s (x2 over 9m54s)  kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m54s (x2 over 9m54s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           9m49s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   NodeReady                9m32s                  kubelet          Node ha-220134-m04 status is now: NodeReady
	  Normal   RegisteredNode           79s                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           78s                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   NodeNotReady             39s                    node-controller  Node ha-220134-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)        kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)        kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)        kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)        kubelet          Node ha-220134-m04 has been rebooted, boot id: 8f925e92-9813-4393-b910-64fefb6efe12
	  Normal   NodeReady                8s (x2 over 8s)        kubelet          Node ha-220134-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057678] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.199862] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.121638] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281974] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.332937] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060566] kauditd_printk_skb: 130 callbacks suppressed
	[Aug12 12:12] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.913038] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.066004] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.086767] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.012156] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.877478] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 12:14] kauditd_printk_skb: 26 callbacks suppressed
	[Aug12 12:21] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 12:24] systemd-fstab-generator[3847]: Ignoring "noauto" option for root device
	[  +0.089398] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060948] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +0.172461] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.153298] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[  +0.291344] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.859510] systemd-fstab-generator[4035]: Ignoring "noauto" option for root device
	[  +3.526818] kauditd_printk_skb: 171 callbacks suppressed
	[ +10.883583] kauditd_printk_skb: 35 callbacks suppressed
	[ +11.172198] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 12:25] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99] <==
	{"level":"warn","ts":"2024-08-12T12:22:48.734946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:22:48.260876Z","time spent":"474.045618ms","remote":"127.0.0.1:58030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/08/12 12:22:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T12:22:48.734231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.245581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T12:22:48.735663Z","caller":"traceutil/trace.go:171","msg":"trace[1410345890] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"281.037013ms","start":"2024-08-12T12:22:48.454619Z","end":"2024-08-12T12:22:48.735656Z","steps":["trace[1410345890] 'agreement among raft nodes before linearized reading'  (duration: 268.236373ms)"],"step_count":1}
	2024/08/12 12:22:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T12:22:48.996244Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.228:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:22:48.996411Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.228:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T12:22:48.997987Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"19024f543fef3d0c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-12T12:22:48.998148Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998165Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998194Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998251Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998343Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.99847Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998515Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998525Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.99855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998662Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.99869Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998729Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:49.00282Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-08-12T12:22:49.002959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-08-12T12:22:49.00297Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-220134","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.228:2380"],"advertise-client-urls":["https://192.168.39.228:2379"]}
	
	
	==> etcd [6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba] <==
	{"level":"warn","ts":"2024-08-12T12:25:37.604447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.614513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.616749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.618595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.679684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.721545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:37.780059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"19024f543fef3d0c","from":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-12T12:25:40.008739Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.186:2380/version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:40.008823Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:41.285173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"21d78cb68f18ad2f","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:41.28525Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"21d78cb68f18ad2f","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:44.010748Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.186:2380/version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:44.010833Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:46.285313Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"21d78cb68f18ad2f","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:46.285414Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"21d78cb68f18ad2f","rtt":"0s","error":"dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:48.013895Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.186:2380/version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-12T12:25:48.013941Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"21d78cb68f18ad2f","error":"Get \"https://192.168.39.186:2380/version\": dial tcp 192.168.39.186:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-12T12:25:50.209667Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:50.209737Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:50.212958Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:50.235753Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"19024f543fef3d0c","to":"21d78cb68f18ad2f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-12T12:25:50.235819Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:50.250707Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"19024f543fef3d0c","to":"21d78cb68f18ad2f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-12T12:25:50.250834Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:58.390591Z","caller":"traceutil/trace.go:171","msg":"trace[459603598] transaction","detail":"{read_only:false; response_revision:2454; number_of_response:1; }","duration":"108.005121ms","start":"2024-08-12T12:25:58.282556Z","end":"2024-08-12T12:25:58.390561Z","steps":["trace[459603598] 'process raft request'  (duration: 104.50283ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:26:38 up 15 min,  0 users,  load average: 0.67, 0.53, 0.32
	Linux ha-220134 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945] <==
	I0812 12:26:06.813046       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:26:16.814482       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:26:16.814540       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:26:16.814715       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:26:16.814755       1 main.go:299] handling current node
	I0812 12:26:16.814769       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:26:16.814786       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:26:16.814866       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:26:16.814897       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:26:26.811564       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:26:26.811817       1 main.go:299] handling current node
	I0812 12:26:26.811869       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:26:26.811891       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:26:26.812095       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:26:26.812145       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:26:26.812337       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:26:26.812373       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:26:36.814921       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:26:36.815339       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:26:36.815909       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:26:36.816140       1 main.go:299] handling current node
	I0812 12:26:36.816375       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:26:36.816482       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:26:36.816840       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:26:36.816965       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa] <==
	I0812 12:22:22.006576       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:22.006681       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:22.006849       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:22.006873       1 main.go:299] handling current node
	I0812 12:22:22.006894       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:22.006916       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:22.007011       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:22.007042       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:22:31.997821       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:31.998020       1 main.go:299] handling current node
	I0812 12:22:31.998121       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:31.998128       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:31.998733       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:31.998760       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:22:31.998945       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:31.998969       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:42.001356       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:42.001408       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:42.001566       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:42.001593       1 main.go:299] handling current node
	I0812 12:22:42.001607       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:42.001611       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:42.001661       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:42.001683       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	E0812 12:22:47.156971       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e] <==
	I0812 12:25:06.215571       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:25:06.215650       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:25:06.307472       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 12:25:06.307523       1 policy_source.go:224] refreshing policies
	I0812 12:25:06.313765       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 12:25:06.314259       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 12:25:06.315913       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 12:25:06.320403       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 12:25:06.320758       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 12:25:06.324747       1 aggregator.go:165] initial CRD sync complete...
	I0812 12:25:06.324819       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 12:25:06.324845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 12:25:06.332744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0812 12:25:06.354230       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.215]
	I0812 12:25:06.355609       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:25:06.382831       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0812 12:25:06.390854       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0812 12:25:06.392479       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 12:25:06.413016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 12:25:06.413136       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 12:25:06.428612       1 cache.go:39] Caches are synced for autoregister controller
	I0812 12:25:06.422677       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 12:25:07.220627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 12:25:07.614891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.215 192.168.39.228]
	W0812 12:25:17.633091       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.228]
	
	
	==> kube-apiserver [f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e] <==
	I0812 12:24:25.749413       1 options.go:221] external host was not specified, using 192.168.39.228
	I0812 12:24:25.761668       1 server.go:148] Version: v1.30.3
	I0812 12:24:25.761735       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:24:26.430954       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 12:24:26.450484       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 12:24:26.450609       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 12:24:26.450890       1 instance.go:299] Using reconciler: lease
	I0812 12:24:26.451809       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0812 12:24:46.430577       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 12:24:46.430577       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 12:24:46.452337       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0812 12:24:46.452337       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e] <==
	I0812 12:24:26.734644       1 serving.go:380] Generated self-signed cert in-memory
	I0812 12:24:27.166259       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 12:24:27.166348       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:24:27.176885       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:24:27.177057       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:24:27.177550       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 12:24:27.177694       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0812 12:24:47.458955       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.228:8443/healthz\": dial tcp 192.168.39.228:8443: connect: connection refused"
	
	
	==> kube-controller-manager [af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823] <==
	I0812 12:25:19.410754       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 12:25:19.423995       1 shared_informer.go:320] Caches are synced for taint
	I0812 12:25:19.424166       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0812 12:25:19.424531       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m02"
	I0812 12:25:19.424768       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m03"
	I0812 12:25:19.424803       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134-m04"
	I0812 12:25:19.424844       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220134"
	I0812 12:25:19.425849       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0812 12:25:19.427569       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 12:25:19.826894       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 12:25:19.826947       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 12:25:19.860191       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 12:25:29.358331       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-nvc9d\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 12:25:29.360572       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"12078584-fc59-4d04-a0c4-2e588b785852", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-nvc9d": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:25:29.375833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.880789ms"
	I0812 12:25:29.376609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="101.399µs"
	I0812 12:25:32.775998       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-nvc9d\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 12:25:32.777746       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"12078584-fc59-4d04-a0c4-2e588b785852", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-nvc9d": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:25:32.779917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.100116ms"
	I0812 12:25:32.783917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89.173µs"
	I0812 12:25:43.647808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.764067ms"
	I0812 12:25:43.648001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.029µs"
	I0812 12:26:05.909490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.712186ms"
	I0812 12:26:05.909645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.689µs"
	I0812 12:26:29.800578       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220134-m04"
	
	
	==> kube-proxy [43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e] <==
	E0812 12:21:25.379915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:25.379990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:25.380035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.098824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.098912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.098848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.098943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.099085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.099188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:40.996051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:40.996562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:40.996703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:40.996894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:44.068359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:44.068497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:59.428682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:59.428804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:05.572434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:05.572495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:05.572648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:05.572685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:36.292017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:36.292799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:48.579789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:48.579914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56] <==
	I0812 12:24:26.437535       1 server_linux.go:69] "Using iptables proxy"
	E0812 12:24:26.887569       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:29.955138       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:33.027442       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:39.171015       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:48.387125       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 12:25:06.823695       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	I0812 12:25:06.920535       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:25:06.920689       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:25:06.920733       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:25:06.929944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:25:06.930216       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:25:06.930252       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:25:06.935197       1 config.go:192] "Starting service config controller"
	I0812 12:25:06.935320       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:25:06.935357       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:25:06.935416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:25:06.941907       1 config.go:319] "Starting node config controller"
	I0812 12:25:06.941949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:25:07.037049       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:25:07.037333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:25:07.042123       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74] <==
	W0812 12:25:01.855944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:01.856006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:02.187525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.228:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:02.187624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.228:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:02.299686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.228:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:02.299809       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.228:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:03.144867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.228:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:03.144944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.228:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:03.760169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:03.760228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.034888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.228:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.035027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.228:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.053791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.228:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.053895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.228:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.309833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.228:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.309876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.228:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:06.243382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 12:25:06.245659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 12:25:06.245683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:25:06.245804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:25:06.245599       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:25:06.245889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 12:25:06.245584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:25:06.245942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0812 12:25:07.967709       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768] <==
	E0812 12:22:44.361973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:44.898374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:44.898426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:45.195091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 12:22:45.195147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 12:22:45.336684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 12:22:45.336743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 12:22:45.368251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 12:22:45.368430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 12:22:46.130202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:46.130233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:46.274686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 12:22:46.274736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 12:22:46.280119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 12:22:46.280230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 12:22:46.412994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:22:46.413099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:22:46.924043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:22:46.924073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:22:47.270415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:47.270450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0812 12:22:48.685041       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0812 12:22:48.685180       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:22:48.685375       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0812 12:22:48.685586       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 12 12:25:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:25:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:25:12 ha-220134 kubelet[1373]: I0812 12:25:12.327198    1373 scope.go:117] "RemoveContainer" containerID="7081d52c1eb4ab20d6c5b56c16344435565fcebc6d1995e156a8b868152a1a2c"
	Aug 12 12:25:13 ha-220134 kubelet[1373]: I0812 12:25:13.274123    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:25:13 ha-220134 kubelet[1373]: E0812 12:25:13.274558    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:25:25 ha-220134 kubelet[1373]: I0812 12:25:25.070640    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-qh8vv" podStartSLOduration=558.400337467 podStartE2EDuration="9m21.070596293s" podCreationTimestamp="2024-08-12 12:16:04 +0000 UTC" firstStartedPulling="2024-08-12 12:16:05.433741098 +0000 UTC m=+233.286339419" lastFinishedPulling="2024-08-12 12:16:08.103999919 +0000 UTC m=+235.956598245" observedRunningTime="2024-08-12 12:16:09.265841202 +0000 UTC m=+237.118439544" watchObservedRunningTime="2024-08-12 12:25:25.070596293 +0000 UTC m=+792.923194615"
	Aug 12 12:25:26 ha-220134 kubelet[1373]: I0812 12:25:26.275125    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:25:26 ha-220134 kubelet[1373]: E0812 12:25:26.275392    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:25:37 ha-220134 kubelet[1373]: I0812 12:25:37.274563    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:25:37 ha-220134 kubelet[1373]: E0812 12:25:37.274850    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:25:51 ha-220134 kubelet[1373]: I0812 12:25:51.274615    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:25:51 ha-220134 kubelet[1373]: E0812 12:25:51.275003    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:25:58 ha-220134 kubelet[1373]: I0812 12:25:58.275874    1373 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-220134" podUID="393b98a5-fa45-458d-9d14-b74f09c9384a"
	Aug 12 12:25:58 ha-220134 kubelet[1373]: I0812 12:25:58.410263    1373 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-220134"
	Aug 12 12:26:06 ha-220134 kubelet[1373]: I0812 12:26:06.276109    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:26:06 ha-220134 kubelet[1373]: E0812 12:26:06.276667    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:26:12 ha-220134 kubelet[1373]: E0812 12:26:12.306559    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:26:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:26:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:26:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:26:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:26:21 ha-220134 kubelet[1373]: I0812 12:26:21.274899    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:26:21 ha-220134 kubelet[1373]: E0812 12:26:21.275757    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	Aug 12 12:26:35 ha-220134 kubelet[1373]: I0812 12:26:35.274738    1373 scope.go:117] "RemoveContainer" containerID="9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57"
	Aug 12 12:26:35 ha-220134 kubelet[1373]: E0812 12:26:35.275352    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bca65bc5-3ba1-44be-8606-f8235cf9b3d0)\"" pod="kube-system/storage-provisioner" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 12:26:36.796755  493436 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19411-463103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220134 -n ha-220134
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (353.57s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 stop -v=7 --alsologtostderr: exit status 82 (2m0.502202081s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220134-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:26:56.587771  493845 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:26:56.587887  493845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:26:56.587892  493845 out.go:304] Setting ErrFile to fd 2...
	I0812 12:26:56.587898  493845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:26:56.588085  493845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:26:56.588318  493845 out.go:298] Setting JSON to false
	I0812 12:26:56.588404  493845 mustload.go:65] Loading cluster: ha-220134
	I0812 12:26:56.588767  493845 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:26:56.588852  493845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:26:56.589029  493845 mustload.go:65] Loading cluster: ha-220134
	I0812 12:26:56.589180  493845 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:26:56.589202  493845 stop.go:39] StopHost: ha-220134-m04
	I0812 12:26:56.589607  493845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:26:56.589685  493845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:26:56.606210  493845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0812 12:26:56.606947  493845 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:26:56.607588  493845 main.go:141] libmachine: Using API Version  1
	I0812 12:26:56.607622  493845 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:26:56.608075  493845 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:26:56.610752  493845 out.go:177] * Stopping node "ha-220134-m04"  ...
	I0812 12:26:56.612166  493845 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0812 12:26:56.612218  493845 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:26:56.612535  493845 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0812 12:26:56.612570  493845 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:26:56.616369  493845 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:26:56.616901  493845 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:26:24 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:26:56.616932  493845 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:26:56.617126  493845 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:26:56.617347  493845 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:26:56.617543  493845 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:26:56.617709  493845 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	I0812 12:26:56.704413  493845 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0812 12:26:56.759059  493845 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0812 12:26:56.811789  493845 main.go:141] libmachine: Stopping "ha-220134-m04"...
	I0812 12:26:56.811833  493845 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:26:56.813628  493845 main.go:141] libmachine: (ha-220134-m04) Calling .Stop
	I0812 12:26:56.817541  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 0/120
	I0812 12:26:57.819710  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 1/120
	I0812 12:26:58.821877  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 2/120
	I0812 12:26:59.824051  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 3/120
	I0812 12:27:00.825505  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 4/120
	I0812 12:27:01.827781  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 5/120
	I0812 12:27:02.829221  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 6/120
	I0812 12:27:03.830740  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 7/120
	I0812 12:27:04.832616  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 8/120
	I0812 12:27:05.834508  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 9/120
	I0812 12:27:06.836372  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 10/120
	I0812 12:27:07.837934  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 11/120
	I0812 12:27:08.839870  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 12/120
	I0812 12:27:09.841435  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 13/120
	I0812 12:27:10.844210  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 14/120
	I0812 12:27:11.846362  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 15/120
	I0812 12:27:12.847753  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 16/120
	I0812 12:27:13.849327  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 17/120
	I0812 12:27:14.850637  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 18/120
	I0812 12:27:15.851920  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 19/120
	I0812 12:27:16.854317  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 20/120
	I0812 12:27:17.855713  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 21/120
	I0812 12:27:18.857557  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 22/120
	I0812 12:27:19.859630  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 23/120
	I0812 12:27:20.860912  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 24/120
	I0812 12:27:21.863175  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 25/120
	I0812 12:27:22.864796  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 26/120
	I0812 12:27:23.866339  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 27/120
	I0812 12:27:24.868062  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 28/120
	I0812 12:27:25.869800  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 29/120
	I0812 12:27:26.872201  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 30/120
	I0812 12:27:27.873667  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 31/120
	I0812 12:27:28.875396  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 32/120
	I0812 12:27:29.876724  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 33/120
	I0812 12:27:30.878134  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 34/120
	I0812 12:27:31.880418  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 35/120
	I0812 12:27:32.882168  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 36/120
	I0812 12:27:33.883772  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 37/120
	I0812 12:27:34.885294  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 38/120
	I0812 12:27:35.887677  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 39/120
	I0812 12:27:36.889204  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 40/120
	I0812 12:27:37.890805  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 41/120
	I0812 12:27:38.892000  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 42/120
	I0812 12:27:39.894361  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 43/120
	I0812 12:27:40.895912  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 44/120
	I0812 12:27:41.898421  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 45/120
	I0812 12:27:42.899980  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 46/120
	I0812 12:27:43.901544  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 47/120
	I0812 12:27:44.903140  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 48/120
	I0812 12:27:45.904969  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 49/120
	I0812 12:27:46.907733  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 50/120
	I0812 12:27:47.909292  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 51/120
	I0812 12:27:48.910782  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 52/120
	I0812 12:27:49.912170  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 53/120
	I0812 12:27:50.913602  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 54/120
	I0812 12:27:51.915701  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 55/120
	I0812 12:27:52.917375  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 56/120
	I0812 12:27:53.920027  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 57/120
	I0812 12:27:54.922367  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 58/120
	I0812 12:27:55.924601  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 59/120
	I0812 12:27:56.926841  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 60/120
	I0812 12:27:57.928371  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 61/120
	I0812 12:27:58.929778  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 62/120
	I0812 12:27:59.931266  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 63/120
	I0812 12:28:00.932980  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 64/120
	I0812 12:28:01.935154  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 65/120
	I0812 12:28:02.936692  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 66/120
	I0812 12:28:03.938204  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 67/120
	I0812 12:28:04.939917  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 68/120
	I0812 12:28:05.941393  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 69/120
	I0812 12:28:06.943766  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 70/120
	I0812 12:28:07.945434  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 71/120
	I0812 12:28:08.946985  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 72/120
	I0812 12:28:09.948673  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 73/120
	I0812 12:28:10.950642  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 74/120
	I0812 12:28:11.952861  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 75/120
	I0812 12:28:12.955246  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 76/120
	I0812 12:28:13.956825  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 77/120
	I0812 12:28:14.958428  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 78/120
	I0812 12:28:15.959813  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 79/120
	I0812 12:28:16.962927  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 80/120
	I0812 12:28:17.964248  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 81/120
	I0812 12:28:18.965652  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 82/120
	I0812 12:28:19.967731  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 83/120
	I0812 12:28:20.969320  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 84/120
	I0812 12:28:21.971461  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 85/120
	I0812 12:28:22.973630  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 86/120
	I0812 12:28:23.975034  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 87/120
	I0812 12:28:24.976708  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 88/120
	I0812 12:28:25.978191  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 89/120
	I0812 12:28:26.980207  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 90/120
	I0812 12:28:27.981514  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 91/120
	I0812 12:28:28.983631  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 92/120
	I0812 12:28:29.986257  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 93/120
	I0812 12:28:30.988004  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 94/120
	I0812 12:28:31.990201  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 95/120
	I0812 12:28:32.991651  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 96/120
	I0812 12:28:33.993270  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 97/120
	I0812 12:28:34.994977  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 98/120
	I0812 12:28:35.997528  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 99/120
	I0812 12:28:36.999199  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 100/120
	I0812 12:28:38.001324  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 101/120
	I0812 12:28:39.002881  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 102/120
	I0812 12:28:40.005188  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 103/120
	I0812 12:28:41.006599  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 104/120
	I0812 12:28:42.009748  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 105/120
	I0812 12:28:43.011719  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 106/120
	I0812 12:28:44.013225  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 107/120
	I0812 12:28:45.014613  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 108/120
	I0812 12:28:46.016229  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 109/120
	I0812 12:28:47.017769  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 110/120
	I0812 12:28:48.019714  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 111/120
	I0812 12:28:49.021050  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 112/120
	I0812 12:28:50.023226  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 113/120
	I0812 12:28:51.024760  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 114/120
	I0812 12:28:52.026962  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 115/120
	I0812 12:28:53.028367  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 116/120
	I0812 12:28:54.030187  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 117/120
	I0812 12:28:55.031786  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 118/120
	I0812 12:28:56.033525  493845 main.go:141] libmachine: (ha-220134-m04) Waiting for machine to stop 119/120
	I0812 12:28:57.034438  493845 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0812 12:28:57.034499  493845 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0812 12:28:57.036569  493845 out.go:177] 
	W0812 12:28:57.038110  493845 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0812 12:28:57.038129  493845 out.go:239] * 
	* 
	W0812 12:28:57.041689  493845 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 12:28:57.043034  493845 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-220134 stop -v=7 --alsologtostderr": exit status 82
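Note: the stderr above captures the whole failure shape: the stop request for ha-220134-m04 is accepted, libmachine then polls the domain state roughly once per second for 120 rounds ("Waiting for machine to stop 0/120" through "119/120"), and only after the last poll does it give up with GUEST_STOP_TIMEOUT and exit status 82. A minimal, self-contained sketch of that poll-until-timeout pattern follows; stopWithTimeout and its callback arguments are illustrative stand-ins, not minikube's actual libmachine code.

package main

import (
    "errors"
    "fmt"
    "time"
)

// stopWithTimeout issues the stop request, then polls the VM state once per
// second for at most `attempts` rounds, mirroring the
// "Waiting for machine to stop i/120" lines in the log above.
func stopWithTimeout(stopVM func() error, isRunning func() bool, attempts int) error {
    if err := stopVM(); err != nil {
        return err
    }
    for i := 0; i < attempts; i++ {
        if !isRunning() {
            return nil // machine reached the Stopped state
        }
        fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
        time.Sleep(time.Second)
    }
    return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
    // Simulate a guest that ignores the shutdown request, as ha-220134-m04
    // appears to do in this run; three attempts stand in for the real 120.
    err := stopWithTimeout(
        func() error { return nil }, // stop request accepted by the driver
        func() bool { return true }, // state never leaves "Running"
        3,
    )
    fmt.Println("stop err:", err)
}

With a guest that never stops, this prints the same "unable to stop vm" message that the test surfaces as exit status 82.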
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr: exit status 3 (19.059659567s)

                                                
                                                
-- stdout --
	ha-220134
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220134-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:28:57.092006  494304 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:28:57.092238  494304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:28:57.092260  494304 out.go:304] Setting ErrFile to fd 2...
	I0812 12:28:57.092267  494304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:28:57.092462  494304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:28:57.092638  494304 out.go:298] Setting JSON to false
	I0812 12:28:57.092662  494304 mustload.go:65] Loading cluster: ha-220134
	I0812 12:28:57.092719  494304 notify.go:220] Checking for updates...
	I0812 12:28:57.093115  494304 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:28:57.093134  494304 status.go:255] checking status of ha-220134 ...
	I0812 12:28:57.093603  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.093690  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.109797  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0812 12:28:57.110325  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.111070  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.111096  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.111574  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.111859  494304 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:28:57.113455  494304 status.go:330] ha-220134 host status = "Running" (err=<nil>)
	I0812 12:28:57.113476  494304 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:28:57.113772  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.113817  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.129132  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38761
	I0812 12:28:57.129656  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.130174  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.130200  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.130569  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.130805  494304 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:28:57.134107  494304 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:28:57.134332  494304 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:28:57.134370  494304 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:28:57.134529  494304 host.go:66] Checking if "ha-220134" exists ...
	I0812 12:28:57.134899  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.134942  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.150629  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0812 12:28:57.151134  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.151727  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.151754  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.152101  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.152332  494304 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:28:57.152584  494304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:28:57.152625  494304 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:28:57.155907  494304 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:28:57.156387  494304 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:28:57.156443  494304 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:28:57.156628  494304 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:28:57.156845  494304 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:28:57.157024  494304 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:28:57.157197  494304 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:28:57.238136  494304 ssh_runner.go:195] Run: systemctl --version
	I0812 12:28:57.245069  494304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:28:57.263291  494304 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:28:57.263335  494304 api_server.go:166] Checking apiserver status ...
	I0812 12:28:57.263371  494304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:28:57.279027  494304 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5173/cgroup
	W0812 12:28:57.289892  494304 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5173/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:28:57.289956  494304 ssh_runner.go:195] Run: ls
	I0812 12:28:57.296205  494304 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:28:57.302609  494304 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:28:57.302651  494304 status.go:422] ha-220134 apiserver status = Running (err=<nil>)
	I0812 12:28:57.302667  494304 status.go:257] ha-220134 status: &{Name:ha-220134 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:28:57.302693  494304 status.go:255] checking status of ha-220134-m02 ...
	I0812 12:28:57.303009  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.303048  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.318690  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0812 12:28:57.319156  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.319682  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.319701  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.320017  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.320232  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetState
	I0812 12:28:57.322010  494304 status.go:330] ha-220134-m02 host status = "Running" (err=<nil>)
	I0812 12:28:57.322029  494304 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:28:57.322332  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.322370  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.337888  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45839
	I0812 12:28:57.338330  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.338856  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.338879  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.339226  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.339462  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetIP
	I0812 12:28:57.342291  494304 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:28:57.342753  494304 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:24:33 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:28:57.342776  494304 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:28:57.342936  494304 host.go:66] Checking if "ha-220134-m02" exists ...
	I0812 12:28:57.343370  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.343429  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.359051  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0812 12:28:57.359484  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.360044  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.360066  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.360392  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.360567  494304 main.go:141] libmachine: (ha-220134-m02) Calling .DriverName
	I0812 12:28:57.360768  494304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:28:57.360793  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHHostname
	I0812 12:28:57.363874  494304 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:28:57.364401  494304 main.go:141] libmachine: (ha-220134-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:dc:57", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:24:33 +0000 UTC Type:0 Mac:52:54:00:fc:dc:57 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-220134-m02 Clientid:01:52:54:00:fc:dc:57}
	I0812 12:28:57.364438  494304 main.go:141] libmachine: (ha-220134-m02) DBG | domain ha-220134-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:fc:dc:57 in network mk-ha-220134
	I0812 12:28:57.364621  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHPort
	I0812 12:28:57.364817  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHKeyPath
	I0812 12:28:57.365018  494304 main.go:141] libmachine: (ha-220134-m02) Calling .GetSSHUsername
	I0812 12:28:57.365242  494304 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m02/id_rsa Username:docker}
	I0812 12:28:57.455392  494304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:28:57.474815  494304 kubeconfig.go:125] found "ha-220134" server: "https://192.168.39.254:8443"
	I0812 12:28:57.474850  494304 api_server.go:166] Checking apiserver status ...
	I0812 12:28:57.474895  494304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:28:57.490780  494304 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1487/cgroup
	W0812 12:28:57.501029  494304 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1487/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:28:57.501126  494304 ssh_runner.go:195] Run: ls
	I0812 12:28:57.505964  494304 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0812 12:28:57.512609  494304 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0812 12:28:57.512644  494304 status.go:422] ha-220134-m02 apiserver status = Running (err=<nil>)
	I0812 12:28:57.512654  494304 status.go:257] ha-220134-m02 status: &{Name:ha-220134-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:28:57.512669  494304 status.go:255] checking status of ha-220134-m04 ...
	I0812 12:28:57.513093  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.513141  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.529623  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0812 12:28:57.530061  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.530575  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.530601  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.530956  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.531192  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetState
	I0812 12:28:57.532916  494304 status.go:330] ha-220134-m04 host status = "Running" (err=<nil>)
	I0812 12:28:57.532937  494304 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:28:57.533251  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.533296  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.549234  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0812 12:28:57.549692  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.550185  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.550210  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.550568  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.550790  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetIP
	I0812 12:28:57.553806  494304 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:28:57.554292  494304 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:26:24 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:28:57.554332  494304 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:28:57.554422  494304 host.go:66] Checking if "ha-220134-m04" exists ...
	I0812 12:28:57.554728  494304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:28:57.554773  494304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:28:57.569815  494304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0812 12:28:57.570180  494304 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:28:57.570548  494304 main.go:141] libmachine: Using API Version  1
	I0812 12:28:57.570568  494304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:28:57.570888  494304 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:28:57.571066  494304 main.go:141] libmachine: (ha-220134-m04) Calling .DriverName
	I0812 12:28:57.571236  494304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:28:57.571260  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHHostname
	I0812 12:28:57.573951  494304 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:28:57.574332  494304 main.go:141] libmachine: (ha-220134-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:6c:80", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:26:24 +0000 UTC Type:0 Mac:52:54:00:c7:6c:80 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-220134-m04 Clientid:01:52:54:00:c7:6c:80}
	I0812 12:28:57.574372  494304 main.go:141] libmachine: (ha-220134-m04) DBG | domain ha-220134-m04 has defined IP address 192.168.39.39 and MAC address 52:54:00:c7:6c:80 in network mk-ha-220134
	I0812 12:28:57.574501  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHPort
	I0812 12:28:57.574670  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHKeyPath
	I0812 12:28:57.574814  494304 main.go:141] libmachine: (ha-220134-m04) Calling .GetSSHUsername
	I0812 12:28:57.574945  494304 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134-m04/id_rsa Username:docker}
	W0812 12:29:16.105350  494304 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0812 12:29:16.105472  494304 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0812 12:29:16.105489  494304 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0812 12:29:16.105496  494304 status.go:257] ha-220134-m04 status: &{Name:ha-220134-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0812 12:29:16.105518  494304 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr" : exit status 3
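Note: per the stderr above, the status probe works node by node: dial SSH on port 22 with the machine's key, run `sudo systemctl is-active --quiet service kubelet`, and for control-plane nodes confirm the API server by fetching https://192.168.39.254:8443/healthz, which here returns 200 for ha-220134 and ha-220134-m02. For ha-220134-m04 the SSH dial fails with "no route to host", so the node is reported Host:Error / Kubelet:Nonexistent and the command exits 3. Below is a minimal sketch of the healthz half of that check; it is an illustration only, and it skips TLS verification purely to stay self-contained, whereas a real probe would verify the server certificate.

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// checkHealthz performs the kind of probe shown in the log: an HTTPS GET of
// the API server's /healthz endpoint, treated as healthy iff it answers 200.
func checkHealthz(url string) (bool, error) {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // Skipping verification keeps the sketch free of CA plumbing;
            // do not do this outside of an illustration.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    resp, err := client.Get(url)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    return resp.StatusCode == http.StatusOK, nil
}

func main() {
    ok, err := checkHealthz("https://192.168.39.254:8443/healthz")
    fmt.Println("apiserver healthy:", ok, "err:", err)
}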
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220134 -n ha-220134
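Note: the `status --format={{.Host}}` invocation above renders a Go text/template over each node's status value (the same struct that appears in the stderr as `&{Name:ha-220134 Host:Running ...}`), so the helper gets back just the Host field. A small sketch of that template evaluation, using a reduced Status type as a stand-in for minikube's own struct:

package main

import (
    "os"
    "text/template"
)

// Status mirrors only the fields visible in the status output above; it is a
// reduced stand-in, not minikube's actual type.
type Status struct {
    Name       string
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
}

func main() {
    // Rendering the same template the helper passes via --format prints "Running".
    tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    s := Status{Name: "ha-220134", Host: "Running", Kubelet: "Running",
        APIServer: "Running", Kubeconfig: "Configured"}
    if err := tmpl.Execute(os.Stdout, s); err != nil {
        panic(err)
    }
}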
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220134 logs -n 25: (1.730184196s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m04 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp testdata/cp-test.txt                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134:/home/docker/cp-test_ha-220134-m04_ha-220134.txt                      |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134 sudo cat                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134.txt                                |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m02:/home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m02 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m03:/home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n                                                                | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | ha-220134-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-220134 ssh -n ha-220134-m03 sudo cat                                         | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC | 12 Aug 24 12:17 UTC |
	|         | /home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-220134 node stop m02 -v=7                                                    | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-220134 node start m02 -v=7                                                   | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:19 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-220134 -v=7                                                          | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-220134 -v=7                                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-220134 --wait=true -v=7                                                   | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:22 UTC | 12 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-220134                                                               | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:26 UTC |                     |
	| node    | ha-220134 node delete m03 -v=7                                                  | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:26 UTC | 12 Aug 24 12:26 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-220134 stop -v=7                                                             | ha-220134 | jenkins | v1.33.1 | 12 Aug 24 12:26 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:22:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:22:47.577579  492160 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:22:47.577697  492160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:22:47.577706  492160 out.go:304] Setting ErrFile to fd 2...
	I0812 12:22:47.577711  492160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:22:47.577881  492160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:22:47.578421  492160 out.go:298] Setting JSON to false
	I0812 12:22:47.579508  492160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14699,"bootTime":1723450669,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:22:47.579578  492160 start.go:139] virtualization: kvm guest
	I0812 12:22:47.581885  492160 out.go:177] * [ha-220134] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:22:47.583618  492160 notify.go:220] Checking for updates...
	I0812 12:22:47.583635  492160 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:22:47.585293  492160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:22:47.586843  492160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:22:47.588201  492160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:22:47.589464  492160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:22:47.590868  492160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:22:47.592783  492160 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:22:47.592927  492160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:22:47.593417  492160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:22:47.593478  492160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:22:47.609584  492160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0812 12:22:47.610160  492160 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:22:47.610807  492160 main.go:141] libmachine: Using API Version  1
	I0812 12:22:47.610834  492160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:22:47.611182  492160 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:22:47.611359  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.650048  492160 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 12:22:47.651406  492160 start.go:297] selected driver: kvm2
	I0812 12:22:47.651425  492160 start.go:901] validating driver "kvm2" against &{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:22:47.651648  492160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:22:47.652099  492160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:22:47.652194  492160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:22:47.668087  492160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:22:47.668835  492160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:22:47.668925  492160 cni.go:84] Creating CNI manager for ""
	I0812 12:22:47.668940  492160 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 12:22:47.669026  492160 start.go:340] cluster config:
	{Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:22:47.669262  492160 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:22:47.671146  492160 out.go:177] * Starting "ha-220134" primary control-plane node in "ha-220134" cluster
	I0812 12:22:47.672616  492160 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:22:47.672663  492160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:22:47.672685  492160 cache.go:56] Caching tarball of preloaded images
	I0812 12:22:47.672802  492160 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:22:47.672817  492160 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:22:47.673008  492160 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/config.json ...
	I0812 12:22:47.673269  492160 start.go:360] acquireMachinesLock for ha-220134: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:22:47.673318  492160 start.go:364] duration metric: took 27.094µs to acquireMachinesLock for "ha-220134"
	I0812 12:22:47.673338  492160 start.go:96] Skipping create...Using existing machine configuration
	I0812 12:22:47.673349  492160 fix.go:54] fixHost starting: 
	I0812 12:22:47.673656  492160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:22:47.673694  492160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:22:47.688733  492160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0812 12:22:47.689225  492160 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:22:47.689686  492160 main.go:141] libmachine: Using API Version  1
	I0812 12:22:47.689705  492160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:22:47.690066  492160 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:22:47.690281  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.690492  492160 main.go:141] libmachine: (ha-220134) Calling .GetState
	I0812 12:22:47.692011  492160 fix.go:112] recreateIfNeeded on ha-220134: state=Running err=<nil>
	W0812 12:22:47.692047  492160 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 12:22:47.694084  492160 out.go:177] * Updating the running kvm2 "ha-220134" VM ...
	I0812 12:22:47.695575  492160 machine.go:94] provisionDockerMachine start ...
	I0812 12:22:47.695603  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:22:47.695891  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.698639  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.699128  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.699159  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.699303  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.699526  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.699722  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.699862  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.700029  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.700264  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.700282  492160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 12:22:47.806711  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:22:47.806746  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:47.807039  492160 buildroot.go:166] provisioning hostname "ha-220134"
	I0812 12:22:47.807072  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:47.807291  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.810186  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.810609  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.810642  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.810822  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.811033  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.811201  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.811358  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.811523  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.811724  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.811739  492160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220134 && echo "ha-220134" | sudo tee /etc/hostname
	I0812 12:22:47.929823  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220134
	
	I0812 12:22:47.929864  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:47.932830  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.933395  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:47.933426  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:47.933597  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:47.933809  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.933961  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:47.934075  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:47.934240  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:47.934447  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:47.934468  492160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220134/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:22:48.038517  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
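The shell fragment just above is the provisioner's hostname fix-up: if no /etc/hosts line already ends with the machine name, it rewrites the 127.0.1.1 entry (or appends one) so the hostname resolves locally. A minimal sketch of the same idea, using a hypothetical helper that renders the script for a given hostname (not minikube's actual implementation):

package main

import "fmt"

// hostsUpdateScript is a hypothetical helper mirroring the logged shell:
// map 127.0.1.1 to the VM's hostname unless /etc/hosts already covers it.
func hostsUpdateScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateScript("ha-220134"))
}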
	I0812 12:22:48.038549  492160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:22:48.038597  492160 buildroot.go:174] setting up certificates
	I0812 12:22:48.038609  492160 provision.go:84] configureAuth start
	I0812 12:22:48.038621  492160 main.go:141] libmachine: (ha-220134) Calling .GetMachineName
	I0812 12:22:48.038921  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:22:48.041886  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.042253  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.042276  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.042500  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.044897  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.045352  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.045392  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.045519  492160 provision.go:143] copyHostCerts
	I0812 12:22:48.045554  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:22:48.045615  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:22:48.045627  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:22:48.045710  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:22:48.045834  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:22:48.045863  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:22:48.045872  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:22:48.045914  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:22:48.045990  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:22:48.046015  492160 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:22:48.046032  492160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:22:48.046069  492160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:22:48.046154  492160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.ha-220134 san=[127.0.0.1 192.168.39.228 ha-220134 localhost minikube]
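provision.go signs a per-machine server certificate with the local CA, placing the VM's IP and hostname in the SANs listed above. A compact, self-contained sketch of that kind of issuance with Go's standard crypto/x509 (illustrative only, not minikube's code; errors are ignored for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-220134"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-220134", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server certificate DER bytes:", len(srvDER))
}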
	I0812 12:22:48.409906  492160 provision.go:177] copyRemoteCerts
	I0812 12:22:48.410000  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:22:48.410035  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.413269  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.413768  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.413806  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.413972  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:48.414212  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.414401  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:48.414536  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:22:48.497161  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:22:48.497243  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:22:48.525612  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:22:48.525767  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:22:48.553052  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:22:48.553154  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0812 12:22:48.578935  492160 provision.go:87] duration metric: took 540.309638ms to configureAuth
	I0812 12:22:48.578971  492160 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:22:48.579236  492160 config.go:182] Loaded profile config "ha-220134": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:22:48.579334  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:22:48.582106  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.582595  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:22:48.582631  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:22:48.582756  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:22:48.582969  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.583143  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:22:48.583306  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:22:48.583475  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:22:48.583690  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:22:48.583713  492160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:24:19.441316  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:24:19.441357  492160 machine.go:97] duration metric: took 1m31.745762394s to provisionDockerMachine
	I0812 12:24:19.441374  492160 start.go:293] postStartSetup for "ha-220134" (driver="kvm2")
	I0812 12:24:19.441395  492160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:24:19.441422  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.441852  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:24:19.441890  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.445403  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.445945  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.445969  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.446128  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.446374  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.446571  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.446734  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.528994  492160 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:24:19.533473  492160 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:24:19.533504  492160 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:24:19.533583  492160 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:24:19.533686  492160 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:24:19.533700  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:24:19.533830  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:24:19.544000  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:24:19.568869  492160 start.go:296] duration metric: took 127.477266ms for postStartSetup
	I0812 12:24:19.568922  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.569260  492160 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0812 12:24:19.569293  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.572177  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.572646  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.572676  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.572837  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.573032  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.573244  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.573409  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	W0812 12:24:19.651288  492160 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0812 12:24:19.651315  492160 fix.go:56] duration metric: took 1m31.977968081s for fixHost
	I0812 12:24:19.651339  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.654426  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.654827  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.654853  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.654990  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.655193  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.655335  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.655446  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.655648  492160 main.go:141] libmachine: Using SSH client type: native
	I0812 12:24:19.655868  492160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0812 12:24:19.655880  492160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 12:24:19.758356  492160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723465459.723555354
	
	I0812 12:24:19.758386  492160 fix.go:216] guest clock: 1723465459.723555354
	I0812 12:24:19.758396  492160 fix.go:229] Guest: 2024-08-12 12:24:19.723555354 +0000 UTC Remote: 2024-08-12 12:24:19.651322372 +0000 UTC m=+92.113335850 (delta=72.232982ms)
	I0812 12:24:19.758427  492160 fix.go:200] guest clock delta is within tolerance: 72.232982ms
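fix.go asks the guest for `date +%s.%N`, parses the reply, and compares it against the host's clock; a delta under the tolerance (here about 72 ms) means the guest clock is left alone. A rough sketch of that comparison using the values from this log (hypothetical helper, not the real fix.go code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// absolute difference from the supplied host time.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	host := time.Unix(1723465459, 651322372) // the "Remote" timestamp in the log
	d, _ := guestClockDelta("1723465459.723555354\n", host)
	fmt.Println("guest clock delta:", d) // roughly 72ms, within tolerance
}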
	I0812 12:24:19.758445  492160 start.go:83] releasing machines lock for "ha-220134", held for 1m32.085108085s
	I0812 12:24:19.758478  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.758780  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:24:19.761524  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.761939  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.761967  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.762132  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762675  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762904  492160 main.go:141] libmachine: (ha-220134) Calling .DriverName
	I0812 12:24:19.762993  492160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:24:19.763037  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.763164  492160 ssh_runner.go:195] Run: cat /version.json
	I0812 12:24:19.763193  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHHostname
	I0812 12:24:19.765751  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766007  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766153  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.766181  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766349  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.766440  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:19.766467  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:19.766539  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.766659  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHPort
	I0812 12:24:19.766753  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.766843  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHKeyPath
	I0812 12:24:19.766894  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.766961  492160 main.go:141] libmachine: (ha-220134) Calling .GetSSHUsername
	I0812 12:24:19.767132  492160 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/ha-220134/id_rsa Username:docker}
	I0812 12:24:19.861425  492160 ssh_runner.go:195] Run: systemctl --version
	I0812 12:24:19.867724  492160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:24:20.027344  492160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 12:24:20.035531  492160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:24:20.035621  492160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:24:20.044904  492160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 12:24:20.044932  492160 start.go:495] detecting cgroup driver to use...
	I0812 12:24:20.044998  492160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:24:20.060670  492160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:24:20.074880  492160 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:24:20.074956  492160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:24:20.088592  492160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:24:20.103020  492160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:24:20.255655  492160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:24:20.400241  492160 docker.go:233] disabling docker service ...
	I0812 12:24:20.400332  492160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:24:20.416652  492160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:24:20.430546  492160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:24:20.577347  492160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:24:20.724552  492160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:24:20.738895  492160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:24:20.760004  492160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:24:20.760090  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.771013  492160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:24:20.771107  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.783866  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.795411  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.806539  492160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:24:20.819040  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.830381  492160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.844202  492160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:24:20.855431  492160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:24:20.865796  492160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:24:20.876375  492160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:24:21.041154  492160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:24:21.380071  492160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:24:21.380159  492160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:24:21.385847  492160 start.go:563] Will wait 60s for crictl version
	I0812 12:24:21.385922  492160 ssh_runner.go:195] Run: which crictl
	I0812 12:24:21.389853  492160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:24:21.427848  492160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:24:21.427949  492160 ssh_runner.go:195] Run: crio --version
	I0812 12:24:21.457881  492160 ssh_runner.go:195] Run: crio --version
	I0812 12:24:21.488479  492160 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:24:21.489996  492160 main.go:141] libmachine: (ha-220134) Calling .GetIP
	I0812 12:24:21.492937  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:21.493354  492160 main.go:141] libmachine: (ha-220134) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:2e:31", ip: ""} in network mk-ha-220134: {Iface:virbr1 ExpiryTime:2024-08-12 13:11:47 +0000 UTC Type:0 Mac:52:54:00:91:2e:31 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-220134 Clientid:01:52:54:00:91:2e:31}
	I0812 12:24:21.493381  492160 main.go:141] libmachine: (ha-220134) DBG | domain ha-220134 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:2e:31 in network mk-ha-220134
	I0812 12:24:21.493629  492160 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:24:21.498630  492160 kubeadm.go:883] updating cluster {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:24:21.498784  492160 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:24:21.498836  492160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:24:21.544751  492160 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:24:21.544779  492160 crio.go:433] Images already preloaded, skipping extraction
	I0812 12:24:21.544835  492160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:24:21.578970  492160 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:24:21.579001  492160 cache_images.go:84] Images are preloaded, skipping loading
	I0812 12:24:21.579012  492160 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.30.3 crio true true} ...
	I0812 12:24:21.579136  492160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 12:24:21.579212  492160 ssh_runner.go:195] Run: crio config
	I0812 12:24:21.632266  492160 cni.go:84] Creating CNI manager for ""
	I0812 12:24:21.632297  492160 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0812 12:24:21.632317  492160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:24:21.632355  492160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220134 NodeName:ha-220134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:24:21.632499  492160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220134"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:24:21.632522  492160 kube-vip.go:115] generating kube-vip config ...
	I0812 12:24:21.632583  492160 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0812 12:24:21.645137  492160 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0812 12:24:21.645258  492160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
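The cp_enable/lb_enable settings in the generated kube-vip pod follow from the modprobe probe a few lines up: control-plane load-balancing relies on the IPVS kernel modules, so it is only auto-enabled when they load. A hedged sketch of that probe as a standalone check (assumed helper, not kube-vip.go itself):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the logged probe: if the IPVS kernel modules load,
// control-plane load-balancing can be switched on in the kube-vip manifest.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("auto-enable kube-vip load-balancing:", ipvsAvailable())
}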
	I0812 12:24:21.645324  492160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:24:21.654802  492160 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:24:21.654874  492160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0812 12:24:21.664328  492160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0812 12:24:21.680594  492160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:24:21.696758  492160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0812 12:24:21.713188  492160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0812 12:24:21.730954  492160 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0812 12:24:21.735833  492160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:24:21.883350  492160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:24:21.898343  492160 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134 for IP: 192.168.39.228
	I0812 12:24:21.898374  492160 certs.go:194] generating shared ca certs ...
	I0812 12:24:21.898395  492160 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.898591  492160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:24:21.898651  492160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:24:21.898664  492160 certs.go:256] generating profile certs ...
	I0812 12:24:21.898766  492160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/client.key
	I0812 12:24:21.898801  492160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa
	I0812 12:24:21.898832  492160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228 192.168.39.215 192.168.39.186 192.168.39.254]
	I0812 12:24:21.968565  492160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa ...
	I0812 12:24:21.968600  492160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa: {Name:mk7f492d864eb7efe6c3a76c18877669259706b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.968808  492160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa ...
	I0812 12:24:21.968828  492160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa: {Name:mk977dc6aa6dfea27e78b42a178ab60052c7c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:24:21.968925  492160 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt.0d9462fa -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt
	I0812 12:24:21.969131  492160 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key.0d9462fa -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key
	I0812 12:24:21.969325  492160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key
	I0812 12:24:21.969346  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:24:21.969393  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:24:21.969413  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:24:21.969431  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:24:21.969444  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:24:21.969469  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:24:21.969484  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:24:21.969501  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:24:21.969570  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:24:21.969611  492160 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:24:21.969625  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:24:21.969655  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:24:21.969686  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:24:21.969715  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:24:21.969774  492160 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:24:21.969822  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:24:21.969843  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:21.969866  492160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:24:21.970471  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:24:21.996863  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:24:22.022866  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:24:22.049041  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:24:22.080685  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0812 12:24:22.105639  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 12:24:22.129654  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:24:22.153960  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/ha-220134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 12:24:22.177868  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:24:22.202086  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:24:22.227429  492160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:24:22.252771  492160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:24:22.269851  492160 ssh_runner.go:195] Run: openssl version
	I0812 12:24:22.275726  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:24:22.286202  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.290576  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.290649  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:24:22.296152  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:24:22.305633  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:24:22.316559  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.321344  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.321413  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:24:22.327667  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:24:22.338408  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:24:22.349998  492160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.354775  492160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.354848  492160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:24:22.360709  492160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:24:22.370584  492160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:24:22.375445  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 12:24:22.381375  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 12:24:22.387243  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 12:24:22.392999  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 12:24:22.398995  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 12:24:22.404530  492160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 12:24:22.410194  492160 kubeadm.go:392] StartCluster: {Name:ha-220134 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-220134 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.39 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:24:22.410318  492160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:24:22.410378  492160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:24:22.446529  492160 cri.go:89] found id: "735980f5d449557f871b4e20c9f7dfdf9956d199a5945569a2f1c83aae8bdd3a"
	I0812 12:24:22.446561  492160 cri.go:89] found id: "dbbecf0729bcf7a0b2025d25cc36fb9931f9ffe13a952e3da5528edc643af2ac"
	I0812 12:24:22.446566  492160 cri.go:89] found id: "7081d52c1eb4ab20d6c5b56c16344435565fcebc6d1995e156a8b868152a1a2c"
	I0812 12:24:22.446569  492160 cri.go:89] found id: "58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf"
	I0812 12:24:22.446571  492160 cri.go:89] found id: "d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691"
	I0812 12:24:22.446575  492160 cri.go:89] found id: "ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa"
	I0812 12:24:22.446577  492160 cri.go:89] found id: "43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e"
	I0812 12:24:22.446580  492160 cri.go:89] found id: "4c2431108a96b909a72f34d8a50c0871850e86ac11304727ce68d3b0ee757bc8"
	I0812 12:24:22.446582  492160 cri.go:89] found id: "61f57a70138eb6a5793f4aad51b198badab8d77df8d3377d783053cc30d209c4"
	I0812 12:24:22.446588  492160 cri.go:89] found id: "3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99"
	I0812 12:24:22.446603  492160 cri.go:89] found id: "d80fece0b2b4c6f139f27d8c934537167c09359addc6847771b75e37836b89b9"
	I0812 12:24:22.446606  492160 cri.go:89] found id: "e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768"
	I0812 12:24:22.446609  492160 cri.go:89] found id: ""
	I0812 12:24:22.446652  492160 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.748903523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465756748879691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b258770-b483-445d-8acd-31125407241e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.749510960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b340d065-b6e3-4f6c-bc7e-7c6ab675114f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.749564833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b340d065-b6e3-4f6c-bc7e-7c6ab675114f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.749979178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4981d3dd610b59b1d8584a76e3b42351b330074aa453fec4b4f73cf25bba7cc,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723465649286945908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name
\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0
b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annota
tions:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kuber
netes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b340d065-b6e3-4f6c-bc7e-7c6ab675114f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.797326995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb0d6238-f2b1-4309-bb7f-b0f59de72f21 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.797419428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb0d6238-f2b1-4309-bb7f-b0f59de72f21 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.798896302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82557376-b614-4e65-9a86-e92dd3f8a918 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.799538598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465756799503302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82557376-b614-4e65-9a86-e92dd3f8a918 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.800406480Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=156f151d-eb8c-4f43-b3cf-34301652b9b6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.800480570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=156f151d-eb8c-4f43-b3cf-34301652b9b6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.800883559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4981d3dd610b59b1d8584a76e3b42351b330074aa453fec4b4f73cf25bba7cc,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723465649286945908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name
\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0
b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annota
tions:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kuber
netes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=156f151d-eb8c-4f43-b3cf-34301652b9b6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.847812165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ef65a1c-d9b1-4fcc-b4a7-1cb959e6f72b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.847910543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ef65a1c-d9b1-4fcc-b4a7-1cb959e6f72b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.848925734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22c75981-51b5-45e9-a15c-151c9978b1ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.849521130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465756849496729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22c75981-51b5-45e9-a15c-151c9978b1ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.850378558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64bf34f-2a46-43b3-a4be-e4114242c1f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.850492436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64bf34f-2a46-43b3-a4be-e4114242c1f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.850995160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4981d3dd610b59b1d8584a76e3b42351b330074aa453fec4b4f73cf25bba7cc,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723465649286945908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name
\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0
b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annota
tions:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kuber
netes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64bf34f-2a46-43b3-a4be-e4114242c1f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.895053323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c414a2aa-e0bd-4a6e-8655-b03fa46dbd6c name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.895129996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c414a2aa-e0bd-4a6e-8655-b03fa46dbd6c name=/runtime.v1.RuntimeService/Version
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.896443107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8e4c8b6-2e43-4eae-8555-2151df326d9a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.896920766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723465756896891042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8e4c8b6-2e43-4eae-8555-2151df326d9a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.897569164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7c85dee-6ccb-49e9-bc6e-285888b02747 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.897676138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7c85dee-6ccb-49e9-bc6e-285888b02747 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:29:16 ha-220134 crio[3945]: time="2024-08-12 12:29:16.898124365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4981d3dd610b59b1d8584a76e3b42351b330074aa453fec4b4f73cf25bba7cc,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723465649286945908,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723465504291817730,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Annotations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec85593ac4c79e858980eb6b539878009f9efa4d77c5eee85dac9a1e8d00bacf,PodSandboxId:a4a6b470c70abbdb6da84f021f702f303eca344e5d0d680d8da1a6e60c57ffa8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723465498594631412,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annotations:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723465498059647020,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dcc1f027fb3489e24893b124dcd6f666ab12d628a9e12c6b7b14d26b2422e1,PodSandboxId:d3e2acfe2b290d3680d71b917761954d5f7015f0e457131146f1c9e60eaf556b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723465476206933423,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b30f96105671a4e343866852be27970,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4af4cc0977fc94b5242fe9716820beff531853265cf674bd6bb4d63c37a57,PodSandboxId:5646a54fc7ad26b17f2c619720f5475fdda04b52ca13971023a5f59ee702bcf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723465465716467647,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca65bc5-3ba1-44be-8606-f8235cf9b3d0,},Annotations:map[string]string{io.kubernetes.container.hash: d7535719,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9,PodSandboxId:167da13cfce58f450f6d5419b48f6e6fcee683cff89014514008d521a012143a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465637168679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945,PodSandboxId:3a275d5d9110e0ff828fb5d04b20b5a5ff34bdfc046af947c84be8ea47ae588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723465465481126585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11,PodSandboxId:e04de298a51b8c5ef826df91df0488946ed8237fd5314f46f0f9248c1e63b10b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723465465490127358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name
\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56,PodSandboxId:75fa020cbc0eafb504497dce8a30b5619d565bbe83b466272092ec4e8faf6daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723465465327533341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e,PodSandboxId:71fc1c2740167ca33e2e9efb8e6a53e08d6d6b1b54e93bb8a51f5f67b1f89799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723465465260060009,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220134,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: d348dbaa84a96f978a599972e582878c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba,PodSandboxId:94d077c7674498f50473cf7a3fbcdf6ee8adf63214dad294f4575bed128d4486,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723465465210026697,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74,PodSandboxId:f1664d03896ffe1f92174863c18ffa4b74a289a69b307208b29ebd71eb6bf764,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723465465127131514,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0
b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e,PodSandboxId:6569ef6537c27e381aa3bb100b84e5063dac6af186f584ffc3b114a2bd10b53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723465465016740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0925dae6628595ef369e55476b766bf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3ea743b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5e5f2f3e8c959ebd1abeff358ae9ebf36578f80df8e698545f6f03f1dc003c,PodSandboxId:d0ae8920356aabaed300935b0fde9cadc9c06ffbd79a32f3d6877df57ffac6fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723464968121103506,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qh8vv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31a40d8d-51b3-476c-a261-e4958fa5001a,},Annota
tions:map[string]string{io.kubernetes.container.hash: fff3458c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf,PodSandboxId:2c5c191b44764c3f0484222456717418b01cef215777efee66d9182532336de6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763046948052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-t8pg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 219c5cf3-19e1-40fc-98c8-9c2d2a800b7b,},Annotations:map[string]string{io.kuber
netes.container.hash: d0d6257,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691,PodSandboxId:c1f343a193477712e73ad4b868e654d4f62b50f4d314b57be5dd522060d9ad42,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723464763003676529,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mtqtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be769ca5-c3cd-4682-96f3-6244b5e1cadb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d7a5523,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa,PodSandboxId:6bb5cf25bace535baa1ecfd1130c66200e2f2f63f70d0c9146117f0310ee5cb2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723464750926137070,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mh4sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd619441-cf92-4026-98ef-0f50d4bfc470,},Annotations:map[string]string{io.kubernetes.container.hash: 86e9a7f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e,PodSandboxId:d3f2e966dc4ecb346f3b47572bb108d6e88e7eccd4998da15a57b84d872d0158,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723464746717607648,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zcgh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c5f53-1764-43b6-a140-2fec3819210d,},Annotations:map[string]string{io.kubernetes.container.hash: 1f23b229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99,PodSandboxId:e773728876a094b2b8ecc71491feaa4ef9f4cecb6b86c39bebdc4cbfd27d666f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723464726177864361,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d48521a4f0ff7e835626ad8a41bcd761,},Annotations:map[string]string{io.kubernetes.container.hash: bd20d792,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768,PodSandboxId:36c1552f9acffd36e27aa15da482b1884a197cdd6365a0649d4bfbc2d03c991f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723464726065655161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220134,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8440dcd3de63dd3f0b314aca28c58e50,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7c85dee-6ccb-49e9-bc6e-285888b02747 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4981d3dd610b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       6                   5646a54fc7ad2       storage-provisioner
	3f8928354b6b4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago        Running             kube-apiserver            3                   6569ef6537c27       kube-apiserver-ha-220134
	ec85593ac4c79       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago        Running             busybox                   1                   a4a6b470c70ab       busybox-fc5497c4f-qh8vv
	af4d1d3b9cb94       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago        Running             kube-controller-manager   2                   71fc1c2740167       kube-controller-manager-ha-220134
	c5dcc1f027fb3       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago        Running             kube-vip                  0                   d3e2acfe2b290       kube-vip-ha-220134
	9ab4af4cc0977       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago        Exited              storage-provisioner       5                   5646a54fc7ad2       storage-provisioner
	3db6a6687be27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   167da13cfce58       coredns-7db6d8ff4d-t8pg7
	c0e5686d9b943       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   e04de298a51b8       coredns-7db6d8ff4d-mtqtk
	bf25bb773e691       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago        Running             kindnet-cni               1                   3a275d5d9110e       kindnet-mh4sv
	aa48279fba2d2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago        Running             kube-proxy                1                   75fa020cbc0ea       kube-proxy-zcgh8
	14dbe639118d7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago        Exited              kube-controller-manager   1                   71fc1c2740167       kube-controller-manager-ha-220134
	6509beb3fc8e0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago        Running             etcd                      1                   94d077c767449       etcd-ha-220134
	d63c5a66780a6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago        Running             kube-scheduler            1                   f1664d03896ff       kube-scheduler-ha-220134
	f2a17dd84b58b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago        Exited              kube-apiserver            2                   6569ef6537c27       kube-apiserver-ha-220134
	fd5e5f2f3e8c9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago       Exited              busybox                   0                   d0ae8920356aa       busybox-fc5497c4f-qh8vv
	58c1b0454a4f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago       Exited              coredns                   0                   2c5c191b44764       coredns-7db6d8ff4d-t8pg7
	d6bc464a808be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago       Exited              coredns                   0                   c1f343a193477       coredns-7db6d8ff4d-mtqtk
	ec1c98b0147f2       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    16 minutes ago       Exited              kindnet-cni               0                   6bb5cf25bace5       kindnet-mh4sv
	43dd48710573d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago       Exited              kube-proxy                0                   d3f2e966dc4ec       kube-proxy-zcgh8
	3b386f478bcd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago       Exited              etcd                      0                   e773728876a09       etcd-ha-220134
	e302617a6e799       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago       Exited              kube-scheduler            0                   36c1552f9acff       kube-scheduler-ha-220134
	
	
	==> coredns [3db6a6687be277f31a6fab6cd01ff524eb0ed3ce1f28200db0f83ad6360403b9] <==
	[INFO] plugin/kubernetes: Trace[802006662]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 12:24:30.586) (total time: 10000ms):
	Trace[802006662]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (12:24:40.587)
	Trace[802006662]: [10.000973733s] [10.000973733s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [58c1b0454a4f76eadfb28f04c44cc04085f91a613a0d5a0e02a1626785a7f0cf] <==
	[INFO] 10.244.0.4:43198 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083609s
	[INFO] 10.244.2.2:44558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149393s
	[INFO] 10.244.2.2:54267 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000289357s
	[INFO] 10.244.2.2:36401 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192313s
	[INFO] 10.244.2.2:47805 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012737375s
	[INFO] 10.244.2.2:52660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213917s
	[INFO] 10.244.2.2:56721 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019118s
	[INFO] 10.244.1.2:46713 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180271s
	[INFO] 10.244.1.2:45630 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117989s
	[INFO] 10.244.1.2:36911 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001707s
	[INFO] 10.244.2.2:55073 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.2.2:37969 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010618s
	[INFO] 10.244.1.2:57685 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000225366s
	[INFO] 10.244.1.2:52755 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103176s
	[INFO] 10.244.0.4:52936 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131913s
	[INFO] 10.244.0.4:57415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055098s
	[INFO] 10.244.2.2:48523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000363461s
	[INFO] 10.244.1.2:41861 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150101s
	[INFO] 10.244.0.4:60137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147895s
	[INFO] 10.244.0.4:46681 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070169s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c0e5686d9b9433311966948f8416e798d189dbe2c74513b5a28dc2f44990ef11] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1185666084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (12-Aug-2024 12:24:30.237) (total time: 10002ms):
	Trace[1185666084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:24:40.239)
	Trace[1185666084]: [10.002143427s] [10.002143427s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40864->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38224->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:38224->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d6bc464a808be227d086144efa9e4776a595034a7df2cac97d9e24507cc3e691] <==
	[INFO] 10.244.0.4:52443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019087s
	[INFO] 10.244.0.4:57191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001115s
	[INFO] 10.244.0.4:36774 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001249129s
	[INFO] 10.244.0.4:36176 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018293s
	[INFO] 10.244.0.4:52138 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073249s
	[INFO] 10.244.0.4:52765 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054999s
	[INFO] 10.244.2.2:35368 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110859s
	[INFO] 10.244.2.2:55727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119256s
	[INFO] 10.244.1.2:45598 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120462s
	[INFO] 10.244.1.2:57257 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000297797s
	[INFO] 10.244.0.4:48236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152091s
	[INFO] 10.244.0.4:40466 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098727s
	[INFO] 10.244.2.2:37067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001712s
	[INFO] 10.244.2.2:54242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014178s
	[INFO] 10.244.2.2:41816 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019482s
	[INFO] 10.244.1.2:42291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335455s
	[INFO] 10.244.1.2:33492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078001s
	[INFO] 10.244.1.2:52208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005886s
	[INFO] 10.244.0.4:55618 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005463s
	[INFO] 10.244.0.4:59573 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079101s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-220134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_12_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:12:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:29:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:28:00 +0000   Mon, 12 Aug 2024 12:28:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:28:00 +0000   Mon, 12 Aug 2024 12:28:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:28:00 +0000   Mon, 12 Aug 2024 12:28:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:28:00 +0000   Mon, 12 Aug 2024 12:28:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-220134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b36c448dca9a4512802dabd6b631307b
	  System UUID:                b36c448d-ca9a-4512-802d-abd6b631307b
	  Boot ID:                    b1858840-6bc1-4ad6-872f-13825f26f2e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qh8vv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-mtqtk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-t8pg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-220134                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-mh4sv                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-220134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-220134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-zcgh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-220134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-220134                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 4m10s                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Warning  ContainerGCFailed        5m5s (x2 over 6m5s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m59s                node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           3m58s                node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   RegisteredNode           3m10s                node-controller  Node ha-220134 event: Registered Node ha-220134 in Controller
	  Normal   NodeNotReady             104s                 node-controller  Node ha-220134 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     77s (x2 over 17m)    kubelet          Node ha-220134 status is now: NodeHasSufficientPID
	  Normal   NodeReady                77s (x2 over 16m)    kubelet          Node ha-220134 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    77s (x2 over 17m)    kubelet          Node ha-220134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  77s (x2 over 17m)    kubelet          Node ha-220134 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-220134-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_14_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:14:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:29:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:25:47 +0000   Mon, 12 Aug 2024 12:25:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-220134-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ab5f23e5e3d4308ad21378e16e05f36
	  System UUID:                5ab5f23e-5e3d-4308-ad21-378e16e05f36
	  Boot ID:                    4b12cc87-77d4-4a02-89b2-18398058ad76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9hhl4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-220134-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-52flt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-220134-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-220134-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bs72f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-220134-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-220134-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-220134-m02 status is now: NodeNotReady
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s (x8 over 4m31s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node ha-220134-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node ha-220134-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-220134-m02 event: Registered Node ha-220134-m02 in Controller
	
	
	Name:               ha-220134-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220134-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=ha-220134
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_16_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:16:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220134-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:26:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:27:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:27:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:27:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 12:26:29 +0000   Mon, 12 Aug 2024 12:27:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-220134-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 faa5c8215a114c109397b8051f5bfb12
	  System UUID:                faa5c821-5a11-4c10-9397-b8051f5bfb12
	  Boot ID:                    8f925e92-9813-4393-b910-64fefb6efe12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-899g7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-zcp4c              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-s6pvf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-220134-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           3m58s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-220134-m04 event: Registered Node ha-220134-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-220134-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-220134-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-220134-m04 has been rebooted, boot id: 8f925e92-9813-4393-b910-64fefb6efe12
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-220134-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s (x2 over 3m19s)   node-controller  Node ha-220134-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057678] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.199862] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.121638] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.281974] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.332937] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060566] kauditd_printk_skb: 130 callbacks suppressed
	[Aug12 12:12] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.913038] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.066004] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.086767] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.012156] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.877478] kauditd_printk_skb: 29 callbacks suppressed
	[Aug12 12:14] kauditd_printk_skb: 26 callbacks suppressed
	[Aug12 12:21] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 12:24] systemd-fstab-generator[3847]: Ignoring "noauto" option for root device
	[  +0.089398] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060948] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +0.172461] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.153298] systemd-fstab-generator[3885]: Ignoring "noauto" option for root device
	[  +0.291344] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.859510] systemd-fstab-generator[4035]: Ignoring "noauto" option for root device
	[  +3.526818] kauditd_printk_skb: 171 callbacks suppressed
	[ +10.883583] kauditd_printk_skb: 35 callbacks suppressed
	[ +11.172198] kauditd_printk_skb: 1 callbacks suppressed
	[Aug12 12:25] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [3b386f478bcd33468fb660c885f5e379ee85f9a03a04b04a8f52e0c1b1e3cd99] <==
	{"level":"warn","ts":"2024-08-12T12:22:48.734946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-12T12:22:48.260876Z","time spent":"474.045618ms","remote":"127.0.0.1:58030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/08/12 12:22:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T12:22:48.734231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.245581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T12:22:48.735663Z","caller":"traceutil/trace.go:171","msg":"trace[1410345890] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; }","duration":"281.037013ms","start":"2024-08-12T12:22:48.454619Z","end":"2024-08-12T12:22:48.735656Z","steps":["trace[1410345890] 'agreement among raft nodes before linearized reading'  (duration: 268.236373ms)"],"step_count":1}
	2024/08/12 12:22:48 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-12T12:22:48.996244Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.228:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:22:48.996411Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.228:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T12:22:48.997987Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"19024f543fef3d0c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-12T12:22:48.998148Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998165Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998194Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998251Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998343Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.99847Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998515Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"40a2120568119bf3"}
	{"level":"info","ts":"2024-08-12T12:22:48.998525Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.99855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998662Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.99869Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:48.998729Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:22:49.00282Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-08-12T12:22:49.002959Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-08-12T12:22:49.00297Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-220134","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.228:2380"],"advertise-client-urls":["https://192.168.39.228:2379"]}
	
	
	==> etcd [6509beb3fc8e0b6cbf2300b6580fa23851455b33acda1d85a30964d026b08aba] <==
	{"level":"info","ts":"2024-08-12T12:25:50.235753Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"19024f543fef3d0c","to":"21d78cb68f18ad2f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-12T12:25:50.235819Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:50.250707Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"19024f543fef3d0c","to":"21d78cb68f18ad2f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-12T12:25:50.250834Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:25:58.390591Z","caller":"traceutil/trace.go:171","msg":"trace[459603598] transaction","detail":"{read_only:false; response_revision:2454; number_of_response:1; }","duration":"108.005121ms","start":"2024-08-12T12:25:58.282556Z","end":"2024-08-12T12:25:58.390561Z","steps":["trace[459603598] 'process raft request'  (duration: 104.50283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:26:43.124374Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.186:48324","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-12T12:26:43.167946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c switched to configuration voters=(1802090024170110220 4657304779084635123)"}
	{"level":"info","ts":"2024-08-12T12:26:43.170088Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","removed-remote-peer-id":"21d78cb68f18ad2f","removed-remote-peer-urls":["https://192.168.39.186:2380"]}
	{"level":"info","ts":"2024-08-12T12:26:43.170201Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.170384Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:26:43.17048Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.170402Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"19024f543fef3d0c","removed-member-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.170601Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-12T12:26:43.170719Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:26:43.170763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:26:43.170862Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.171098Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","error":"context canceled"}
	{"level":"warn","ts":"2024-08-12T12:26:43.171202Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"21d78cb68f18ad2f","error":"failed to read 21d78cb68f18ad2f on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-12T12:26:43.171373Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.17156Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f","error":"context canceled"}
	{"level":"info","ts":"2024-08-12T12:26:43.171736Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"19024f543fef3d0c","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:26:43.171825Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"info","ts":"2024-08-12T12:26:43.171842Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"19024f543fef3d0c","removed-remote-peer-id":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.186251Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"19024f543fef3d0c","remote-peer-id-stream-handler":"19024f543fef3d0c","remote-peer-id-from":"21d78cb68f18ad2f"}
	{"level":"warn","ts":"2024-08-12T12:26:43.192523Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"19024f543fef3d0c","remote-peer-id-stream-handler":"19024f543fef3d0c","remote-peer-id-from":"21d78cb68f18ad2f"}
	
	
	==> kernel <==
	 12:29:17 up 17 min,  0 users,  load average: 0.41, 0.45, 0.31
	Linux ha-220134 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf25bb773e6910615369f8bbc73dd8013fda797428c492d006b5f65d0d742945] <==
	I0812 12:28:36.812982       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:28:46.818474       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:28:46.818536       1 main.go:299] handling current node
	I0812 12:28:46.818566       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:28:46.818573       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:28:46.818761       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:28:46.818790       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:28:56.818493       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:28:56.818602       1 main.go:299] handling current node
	I0812 12:28:56.818639       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:28:56.818649       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:28:56.818858       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:28:56.818893       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:29:06.819389       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:29:06.819470       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:29:06.819639       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:29:06.819665       1 main.go:299] handling current node
	I0812 12:29:06.819686       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:29:06.819691       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:29:16.821599       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:29:16.821671       1 main.go:299] handling current node
	I0812 12:29:16.821745       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:29:16.821752       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:29:16.821897       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:29:16.821910       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ec1c98b0147f28e45bb638a0673501a1b960454afc8e9ed6564cd23626536dfa] <==
	I0812 12:22:22.006576       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:22.006681       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:22.006849       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:22.006873       1 main.go:299] handling current node
	I0812 12:22:22.006894       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:22.006916       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:22.007011       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:22.007042       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:22:31.997821       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:31.998020       1 main.go:299] handling current node
	I0812 12:22:31.998121       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:31.998128       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:31.998733       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:31.998760       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	I0812 12:22:31.998945       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:31.998969       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:42.001356       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0812 12:22:42.001408       1 main.go:322] Node ha-220134-m04 has CIDR [10.244.3.0/24] 
	I0812 12:22:42.001566       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0812 12:22:42.001593       1 main.go:299] handling current node
	I0812 12:22:42.001607       1 main.go:295] Handling node with IPs: map[192.168.39.215:{}]
	I0812 12:22:42.001611       1 main.go:322] Node ha-220134-m02 has CIDR [10.244.1.0/24] 
	I0812 12:22:42.001661       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I0812 12:22:42.001683       1 main.go:322] Node ha-220134-m03 has CIDR [10.244.2.0/24] 
	E0812 12:22:47.156971       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [3f8928354b6b44b302b1041563331d144538effc10a250f664073f207d5e315e] <==
	I0812 12:25:06.215571       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:25:06.215650       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:25:06.307472       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 12:25:06.307523       1 policy_source.go:224] refreshing policies
	I0812 12:25:06.313765       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 12:25:06.314259       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 12:25:06.315913       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 12:25:06.320403       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 12:25:06.320758       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 12:25:06.324747       1 aggregator.go:165] initial CRD sync complete...
	I0812 12:25:06.324819       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 12:25:06.324845       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 12:25:06.332744       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0812 12:25:06.354230       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.215]
	I0812 12:25:06.355609       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:25:06.382831       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0812 12:25:06.390854       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0812 12:25:06.392479       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 12:25:06.413016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 12:25:06.413136       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 12:25:06.428612       1 cache.go:39] Caches are synced for autoregister controller
	I0812 12:25:06.422677       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 12:25:07.220627       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 12:25:07.614891       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.186 192.168.39.215 192.168.39.228]
	W0812 12:25:17.633091       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.228]
	
	
	==> kube-apiserver [f2a17dd84b58bc6e0bcef7503e97e1db6a315d39b0b80a0c3673bb2277a75d2e] <==
	I0812 12:24:25.749413       1 options.go:221] external host was not specified, using 192.168.39.228
	I0812 12:24:25.761668       1 server.go:148] Version: v1.30.3
	I0812 12:24:25.761735       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:24:26.430954       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0812 12:24:26.450484       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0812 12:24:26.450609       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0812 12:24:26.450890       1 instance.go:299] Using reconciler: lease
	I0812 12:24:26.451809       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0812 12:24:46.430577       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 12:24:46.430577       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0812 12:24:46.452337       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0812 12:24:46.452337       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [14dbe639118d7953c60bf1135194a2becb72dbf8e0546876fc7e4eaa1bc6fb0e] <==
	I0812 12:24:26.734644       1 serving.go:380] Generated self-signed cert in-memory
	I0812 12:24:27.166259       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0812 12:24:27.166348       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:24:27.176885       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:24:27.177057       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:24:27.177550       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 12:24:27.177694       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0812 12:24:47.458955       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.228:8443/healthz\": dial tcp 192.168.39.228:8443: connect: connection refused"
	
	
	==> kube-controller-manager [af4d1d3b9cb9403ec400957b907e0ae4c27e0ef9e59bfe50a31b5327a1184823] <==
	E0812 12:26:59.365256       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:26:59.365351       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:27:19.366443       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:27:19.366500       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:27:19.366510       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:27:19.366515       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	E0812 12:27:19.366520       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220134-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220134-m03"
	I0812 12:27:33.633140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.975535ms"
	I0812 12:27:33.634610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="173.941µs"
	I0812 12:27:33.690668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.523455ms"
	I0812 12:27:33.692555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.665µs"
	I0812 12:27:33.700883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.36171ms"
	I0812 12:27:33.701168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.848µs"
	I0812 12:27:33.816701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.55148ms"
	I0812 12:27:33.817928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.387µs"
	I0812 12:28:02.397885       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-nvc9d\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 12:28:02.398138       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"12078584-fc59-4d04-a0c4-2e588b785852", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-nvc9d": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:28:02.444131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.85498ms"
	E0812 12:28:02.444237       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:28:02.444553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="173.299µs"
	I0812 12:28:02.446699       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-nvc9d\": the object has been modified; please apply your changes to the latest version and try again"
	I0812 12:28:02.446832       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"12078584-fc59-4d04-a0c4-2e588b785852", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-nvc9d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-nvc9d": the object has been modified; please apply your changes to the latest version and try again
	I0812 12:28:02.449741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="320.626µs"
	I0812 12:28:02.574584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.553361ms"
	I0812 12:28:02.575076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.459µs"
	
	
	==> kube-proxy [43dd48710573db9ae05623260417c87a086227a51cf88e4a73f4be9877f69d1e] <==
	E0812 12:21:25.379915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:25.379990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:25.380035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.098824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.098912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.098848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.098943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:32.099085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:32.099188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:40.996051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:40.996562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:40.996703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:40.996894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:44.068359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:44.068497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:21:59.428682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:21:59.428804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:05.572434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:05.572495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:05.572648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:05.572685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220134&resourceVersion=1959": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:36.292017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:36.292799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1921": dial tcp 192.168.39.254:8443: connect: no route to host
	W0812 12:22:48.579789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0812 12:22:48.579914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [aa48279fba2d2cecf58b321e1e15b4603f37a22a652d56e10fdc373093534d56] <==
	I0812 12:24:26.437535       1 server_linux.go:69] "Using iptables proxy"
	E0812 12:24:26.887569       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:29.955138       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:33.027442       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:39.171015       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0812 12:24:48.387125       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220134\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0812 12:25:06.823695       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	I0812 12:25:06.920535       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:25:06.920689       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:25:06.920733       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:25:06.929944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:25:06.930216       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:25:06.930252       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:25:06.935197       1 config.go:192] "Starting service config controller"
	I0812 12:25:06.935320       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:25:06.935357       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:25:06.935416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:25:06.941907       1 config.go:319] "Starting node config controller"
	I0812 12:25:06.941949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:25:07.037049       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:25:07.037333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:25:07.042123       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d63c5a66780a6cbffe7e7cc6087be5b71d5dbd11e57bea254810ed32e7e20b74] <==
	W0812 12:25:02.299686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.228:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:02.299809       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.228:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:03.144867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.228:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:03.144944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.228:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:03.760169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:03.760228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.228:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.034888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.228:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.035027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.228:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.053791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.228:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.053895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.228:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:04.309833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.228:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	E0812 12:25:04.309876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.228:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.228:8443: connect: connection refused
	W0812 12:25:06.243382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 12:25:06.245659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 12:25:06.245683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:25:06.245804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:25:06.245599       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:25:06.245889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0812 12:25:06.245584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:25:06.245942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0812 12:25:07.967709       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 12:26:39.869577       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-899g7\": pod busybox-fc5497c4f-899g7 is already assigned to node \"ha-220134-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-899g7" node="ha-220134-m04"
	E0812 12:26:39.869875       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a1d1f6d5-49d6-4479-bc80-2a8d546b9e9e(default/busybox-fc5497c4f-899g7) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-899g7"
	E0812 12:26:39.869950       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-899g7\": pod busybox-fc5497c4f-899g7 is already assigned to node \"ha-220134-m04\"" pod="default/busybox-fc5497c4f-899g7"
	I0812 12:26:39.870095       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-899g7" node="ha-220134-m04"
	
	
	==> kube-scheduler [e302617a6e799cf77839a408282e31da72879c4f1079e46ceaf2ac82f63e4768] <==
	E0812 12:22:44.361973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:44.898374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:44.898426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:45.195091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 12:22:45.195147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 12:22:45.336684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 12:22:45.336743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 12:22:45.368251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 12:22:45.368430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 12:22:46.130202       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:46.130233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 12:22:46.274686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 12:22:46.274736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0812 12:22:46.280119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 12:22:46.280230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 12:22:46.412994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:22:46.413099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:22:46.924043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:22:46.924073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:22:47.270415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 12:22:47.270450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0812 12:22:48.685041       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0812 12:22:48.685180       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:22:48.685375       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0812 12:22:48.685586       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 12 12:27:49 ha-220134 kubelet[1373]: E0812 12:27:49.889218    1373 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Aug 12 12:27:53 ha-220134 kubelet[1373]: E0812 12:27:53.041598    1373 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-220134?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 12 12:27:53 ha-220134 kubelet[1373]: I0812 12:27:53.041760    1373 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967019    1373 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: E0812 12:27:58.967168    1373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-220134?timeout=10s\": http2: client connection lost" interval="200ms"
	Aug 12 12:27:58 ha-220134 kubelet[1373]: I0812 12:27:58.967446    1373 status_manager.go:853] "Failed to get status for pod" podUID="bca65bc5-3ba1-44be-8606-f8235cf9b3d0" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": http2: client connection lost"
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967051    1373 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967073    1373 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967091    1373 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967243    1373 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967265    1373 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967505    1373 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967527    1373 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:27:58 ha-220134 kubelet[1373]: W0812 12:27:58.967019    1373 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 12 12:28:00 ha-220134 kubelet[1373]: I0812 12:28:00.148588    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-220134" podStartSLOduration=122.148549023 podStartE2EDuration="2m2.148549023s" podCreationTimestamp="2024-08-12 12:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-12 12:26:02.298922972 +0000 UTC m=+830.151521310" watchObservedRunningTime="2024-08-12 12:28:00.148549023 +0000 UTC m=+948.001147360"
	Aug 12 12:28:12 ha-220134 kubelet[1373]: E0812 12:28:12.306069    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:28:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:28:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:28:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:28:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:29:12 ha-220134 kubelet[1373]: E0812 12:29:12.307640    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:29:12 ha-220134 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:29:12 ha-220134 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:29:12 ha-220134 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:29:12 ha-220134 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 12:29:16.436546  494465 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19411-463103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220134 -n ha-220134
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-276573
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-276573
E0812 12:45:44.619102  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-276573: exit status 82 (2m1.884414526s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-276573-m03"  ...
	* Stopping node "multinode-276573-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-276573" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-276573 --wait=true -v=8 --alsologtostderr
E0812 12:48:47.665529  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:50:44.615586  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-276573 --wait=true -v=8 --alsologtostderr: (3m19.533172513s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-276573
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-276573 -n multinode-276573
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-276573 logs -n 25: (1.543428336s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573:/home/docker/cp-test_multinode-276573-m02_multinode-276573.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573 sudo cat                                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m02_multinode-276573.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03:/home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573-m03 sudo cat                                   | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp testdata/cp-test.txt                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573:/home/docker/cp-test_multinode-276573-m03_multinode-276573.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573 sudo cat                                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02:/home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573-m02 sudo cat                                   | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-276573 node stop m03                                                          | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	| node    | multinode-276573 node start                                                             | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| stop    | -p multinode-276573                                                                     | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| start   | -p multinode-276573                                                                     | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:47 UTC | 12 Aug 24 12:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:50 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:47:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:47:40.048744  504120 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:47:40.049033  504120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:47:40.049043  504120 out.go:304] Setting ErrFile to fd 2...
	I0812 12:47:40.049049  504120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:47:40.049309  504120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:47:40.049863  504120 out.go:298] Setting JSON to false
	I0812 12:47:40.050912  504120 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16191,"bootTime":1723450669,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:47:40.050979  504120 start.go:139] virtualization: kvm guest
	I0812 12:47:40.053375  504120 out.go:177] * [multinode-276573] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:47:40.054937  504120 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:47:40.054999  504120 notify.go:220] Checking for updates...
	I0812 12:47:40.058058  504120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:47:40.059638  504120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:47:40.061023  504120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:47:40.062224  504120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:47:40.063473  504120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:47:40.065310  504120 config.go:182] Loaded profile config "multinode-276573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:47:40.065414  504120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:47:40.065796  504120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:47:40.065851  504120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:47:40.081241  504120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0812 12:47:40.081747  504120 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:47:40.082322  504120 main.go:141] libmachine: Using API Version  1
	I0812 12:47:40.082343  504120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:47:40.082767  504120 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:47:40.082968  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.120461  504120 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 12:47:40.121946  504120 start.go:297] selected driver: kvm2
	I0812 12:47:40.121979  504120 start.go:901] validating driver "kvm2" against &{Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:47:40.122212  504120 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:47:40.122678  504120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:47:40.122800  504120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:47:40.138490  504120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:47:40.139239  504120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:47:40.139315  504120 cni.go:84] Creating CNI manager for ""
	I0812 12:47:40.139331  504120 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 12:47:40.139402  504120 start.go:340] cluster config:
	{Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:47:40.139565  504120 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:47:40.141929  504120 out.go:177] * Starting "multinode-276573" primary control-plane node in "multinode-276573" cluster
	I0812 12:47:40.143695  504120 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:47:40.143740  504120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:47:40.143753  504120 cache.go:56] Caching tarball of preloaded images
	I0812 12:47:40.143836  504120 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:47:40.143848  504120 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:47:40.143979  504120 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/config.json ...
	I0812 12:47:40.144194  504120 start.go:360] acquireMachinesLock for multinode-276573: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:47:40.144252  504120 start.go:364] duration metric: took 37.135µs to acquireMachinesLock for "multinode-276573"
	I0812 12:47:40.144273  504120 start.go:96] Skipping create...Using existing machine configuration
	I0812 12:47:40.144282  504120 fix.go:54] fixHost starting: 
	I0812 12:47:40.144561  504120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:47:40.144627  504120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:47:40.159477  504120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0812 12:47:40.159964  504120 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:47:40.160443  504120 main.go:141] libmachine: Using API Version  1
	I0812 12:47:40.160464  504120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:47:40.160858  504120 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:47:40.161036  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.161197  504120 main.go:141] libmachine: (multinode-276573) Calling .GetState
	I0812 12:47:40.162968  504120 fix.go:112] recreateIfNeeded on multinode-276573: state=Running err=<nil>
	W0812 12:47:40.162991  504120 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 12:47:40.165379  504120 out.go:177] * Updating the running kvm2 "multinode-276573" VM ...
	I0812 12:47:40.166961  504120 machine.go:94] provisionDockerMachine start ...
	I0812 12:47:40.166987  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.167217  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.169733  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.170267  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.170310  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.170437  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.170651  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.170833  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.170962  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.171144  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.171421  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.171438  504120 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 12:47:40.287045  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-276573
	
	I0812 12:47:40.287078  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.287363  504120 buildroot.go:166] provisioning hostname "multinode-276573"
	I0812 12:47:40.287391  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.287631  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.290708  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.291099  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.291139  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.291349  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.291596  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.291766  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.291895  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.292093  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.292271  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.292286  504120 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-276573 && echo "multinode-276573" | sudo tee /etc/hostname
	I0812 12:47:40.414263  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-276573
	
	I0812 12:47:40.414307  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.417551  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.417990  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.418027  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.418251  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.418480  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.418792  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.419002  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.419213  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.419407  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.419430  504120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-276573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-276573/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-276573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:47:40.526493  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:47:40.526524  504120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:47:40.526545  504120 buildroot.go:174] setting up certificates
	I0812 12:47:40.526554  504120 provision.go:84] configureAuth start
	I0812 12:47:40.526563  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.526968  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:47:40.529836  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.530233  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.530255  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.530419  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.532806  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.533244  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.533276  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.533436  504120 provision.go:143] copyHostCerts
	I0812 12:47:40.533484  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:47:40.533539  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:47:40.533552  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:47:40.533636  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:47:40.533793  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:47:40.533824  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:47:40.533832  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:47:40.533881  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:47:40.533968  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:47:40.533991  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:47:40.533998  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:47:40.534038  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:47:40.534121  504120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.multinode-276573 san=[127.0.0.1 192.168.39.187 localhost minikube multinode-276573]
	I0812 12:47:40.585907  504120 provision.go:177] copyRemoteCerts
	I0812 12:47:40.585987  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:47:40.586021  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.588986  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.589374  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.589395  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.589582  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.589820  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.589972  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.590099  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:47:40.673045  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:47:40.673138  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:47:40.699555  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:47:40.699644  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0812 12:47:40.725109  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:47:40.725202  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:47:40.750514  504120 provision.go:87] duration metric: took 223.943514ms to configureAuth
	I0812 12:47:40.750550  504120 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:47:40.750783  504120 config.go:182] Loaded profile config "multinode-276573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:47:40.750862  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.753798  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.754241  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.754267  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.754495  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.754695  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.754887  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.755027  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.755166  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.755343  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.755357  504120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:49:11.530675  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:49:11.530730  504120 machine.go:97] duration metric: took 1m31.363750407s to provisionDockerMachine
	I0812 12:49:11.530748  504120 start.go:293] postStartSetup for "multinode-276573" (driver="kvm2")
	I0812 12:49:11.530761  504120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:49:11.530833  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.531215  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:49:11.531247  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.534668  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.535140  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.535172  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.535364  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.535577  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.535744  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.535916  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.620902  504120 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:49:11.625459  504120 command_runner.go:130] > NAME=Buildroot
	I0812 12:49:11.625483  504120 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0812 12:49:11.625488  504120 command_runner.go:130] > ID=buildroot
	I0812 12:49:11.625495  504120 command_runner.go:130] > VERSION_ID=2023.02.9
	I0812 12:49:11.625502  504120 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0812 12:49:11.625597  504120 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:49:11.625616  504120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:49:11.625691  504120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:49:11.625764  504120 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:49:11.625775  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:49:11.625870  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:49:11.636217  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:49:11.660891  504120 start.go:296] duration metric: took 130.125892ms for postStartSetup
	I0812 12:49:11.660939  504120 fix.go:56] duration metric: took 1m31.516658672s for fixHost
	I0812 12:49:11.660974  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.663806  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.664396  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.664420  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.664639  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.664868  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.665007  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.665219  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.665430  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:49:11.665607  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:49:11.665617  504120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:49:11.770234  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723466951.742527319
	
	I0812 12:49:11.770270  504120 fix.go:216] guest clock: 1723466951.742527319
	I0812 12:49:11.770282  504120 fix.go:229] Guest: 2024-08-12 12:49:11.742527319 +0000 UTC Remote: 2024-08-12 12:49:11.660949606 +0000 UTC m=+91.650205786 (delta=81.577713ms)
	I0812 12:49:11.770328  504120 fix.go:200] guest clock delta is within tolerance: 81.577713ms
	I0812 12:49:11.770338  504120 start.go:83] releasing machines lock for "multinode-276573", held for 1m31.626073217s
	I0812 12:49:11.770368  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.770684  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:49:11.773602  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.774005  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.774032  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.774248  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.774843  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.775057  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.775165  504120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:49:11.775206  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.775312  504120 ssh_runner.go:195] Run: cat /version.json
	I0812 12:49:11.775342  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.778235  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778262  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778627  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.778656  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778683  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.778704  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778756  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.778985  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.779026  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.779189  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.779191  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.779319  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.779469  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.779491  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.854108  504120 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0812 12:49:11.854495  504120 ssh_runner.go:195] Run: systemctl --version
	I0812 12:49:11.877824  504120 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0812 12:49:11.878466  504120 command_runner.go:130] > systemd 252 (252)
	I0812 12:49:11.878509  504120 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0812 12:49:11.878590  504120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:49:12.041662  504120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0812 12:49:12.050138  504120 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0812 12:49:12.050443  504120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:49:12.050523  504120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:49:12.060162  504120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 12:49:12.060188  504120 start.go:495] detecting cgroup driver to use...
	I0812 12:49:12.060252  504120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:49:12.077562  504120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:49:12.091967  504120 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:49:12.092045  504120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:49:12.106024  504120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:49:12.120311  504120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:49:12.268894  504120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:49:12.419018  504120 docker.go:233] disabling docker service ...
	I0812 12:49:12.419103  504120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:49:12.437657  504120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:49:12.451691  504120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:49:12.599201  504120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:49:12.739926  504120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:49:12.756616  504120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:49:12.777552  504120 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0812 12:49:12.778110  504120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:49:12.778187  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.790360  504120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:49:12.790446  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.801975  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.813550  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.824850  504120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:49:12.836662  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.848376  504120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.861217  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.872749  504120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:49:12.882337  504120 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0812 12:49:12.882522  504120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:49:12.893119  504120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:49:13.031098  504120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:49:13.306209  504120 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:49:13.306303  504120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:49:13.311596  504120 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0812 12:49:13.311619  504120 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0812 12:49:13.311627  504120 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0812 12:49:13.311635  504120 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 12:49:13.311639  504120 command_runner.go:130] > Access: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311646  504120 command_runner.go:130] > Modify: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311651  504120 command_runner.go:130] > Change: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311654  504120 command_runner.go:130] >  Birth: -
	I0812 12:49:13.311671  504120 start.go:563] Will wait 60s for crictl version
	I0812 12:49:13.311726  504120 ssh_runner.go:195] Run: which crictl
	I0812 12:49:13.316869  504120 command_runner.go:130] > /usr/bin/crictl
	I0812 12:49:13.316938  504120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:49:13.354320  504120 command_runner.go:130] > Version:  0.1.0
	I0812 12:49:13.354348  504120 command_runner.go:130] > RuntimeName:  cri-o
	I0812 12:49:13.354353  504120 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0812 12:49:13.354379  504120 command_runner.go:130] > RuntimeApiVersion:  v1
	I0812 12:49:13.355814  504120 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:49:13.355905  504120 ssh_runner.go:195] Run: crio --version
	I0812 12:49:13.385250  504120 command_runner.go:130] > crio version 1.29.1
	I0812 12:49:13.385275  504120 command_runner.go:130] > Version:        1.29.1
	I0812 12:49:13.385281  504120 command_runner.go:130] > GitCommit:      unknown
	I0812 12:49:13.385286  504120 command_runner.go:130] > GitCommitDate:  unknown
	I0812 12:49:13.385290  504120 command_runner.go:130] > GitTreeState:   clean
	I0812 12:49:13.385295  504120 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 12:49:13.385299  504120 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 12:49:13.385303  504120 command_runner.go:130] > Compiler:       gc
	I0812 12:49:13.385307  504120 command_runner.go:130] > Platform:       linux/amd64
	I0812 12:49:13.385312  504120 command_runner.go:130] > Linkmode:       dynamic
	I0812 12:49:13.385324  504120 command_runner.go:130] > BuildTags:      
	I0812 12:49:13.385331  504120 command_runner.go:130] >   containers_image_ostree_stub
	I0812 12:49:13.385337  504120 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 12:49:13.385347  504120 command_runner.go:130] >   btrfs_noversion
	I0812 12:49:13.385355  504120 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 12:49:13.385364  504120 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 12:49:13.385368  504120 command_runner.go:130] >   seccomp
	I0812 12:49:13.385373  504120 command_runner.go:130] > LDFlags:          unknown
	I0812 12:49:13.385377  504120 command_runner.go:130] > SeccompEnabled:   true
	I0812 12:49:13.385381  504120 command_runner.go:130] > AppArmorEnabled:  false
	I0812 12:49:13.386617  504120 ssh_runner.go:195] Run: crio --version
	I0812 12:49:13.417304  504120 command_runner.go:130] > crio version 1.29.1
	I0812 12:49:13.417337  504120 command_runner.go:130] > Version:        1.29.1
	I0812 12:49:13.417347  504120 command_runner.go:130] > GitCommit:      unknown
	I0812 12:49:13.417354  504120 command_runner.go:130] > GitCommitDate:  unknown
	I0812 12:49:13.417360  504120 command_runner.go:130] > GitTreeState:   clean
	I0812 12:49:13.417367  504120 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 12:49:13.417371  504120 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 12:49:13.417375  504120 command_runner.go:130] > Compiler:       gc
	I0812 12:49:13.417380  504120 command_runner.go:130] > Platform:       linux/amd64
	I0812 12:49:13.417384  504120 command_runner.go:130] > Linkmode:       dynamic
	I0812 12:49:13.417388  504120 command_runner.go:130] > BuildTags:      
	I0812 12:49:13.417392  504120 command_runner.go:130] >   containers_image_ostree_stub
	I0812 12:49:13.417397  504120 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 12:49:13.417401  504120 command_runner.go:130] >   btrfs_noversion
	I0812 12:49:13.417406  504120 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 12:49:13.417410  504120 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 12:49:13.417413  504120 command_runner.go:130] >   seccomp
	I0812 12:49:13.417418  504120 command_runner.go:130] > LDFlags:          unknown
	I0812 12:49:13.417422  504120 command_runner.go:130] > SeccompEnabled:   true
	I0812 12:49:13.417427  504120 command_runner.go:130] > AppArmorEnabled:  false
	I0812 12:49:13.420561  504120 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:49:13.422003  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:49:13.425319  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:13.425739  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:13.425782  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:13.426061  504120 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:49:13.431063  504120 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0812 12:49:13.431180  504120 kubeadm.go:883] updating cluster {Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:49:13.431335  504120 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:49:13.431380  504120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:49:13.484804  504120 command_runner.go:130] > {
	I0812 12:49:13.484829  504120 command_runner.go:130] >   "images": [
	I0812 12:49:13.484833  504120 command_runner.go:130] >     {
	I0812 12:49:13.484840  504120 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 12:49:13.484845  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.484851  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 12:49:13.484854  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484858  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.484870  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 12:49:13.484877  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 12:49:13.484880  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484885  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.484888  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.484892  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.484898  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.484908  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.484912  504120 command_runner.go:130] >     },
	I0812 12:49:13.484916  504120 command_runner.go:130] >     {
	I0812 12:49:13.484921  504120 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 12:49:13.484930  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.484935  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 12:49:13.484941  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484945  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.484951  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 12:49:13.484960  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 12:49:13.484964  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484968  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.484972  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.484981  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.484989  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.484992  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.484996  504120 command_runner.go:130] >     },
	I0812 12:49:13.484999  504120 command_runner.go:130] >     {
	I0812 12:49:13.485004  504120 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 12:49:13.485008  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485013  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 12:49:13.485016  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485020  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485029  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 12:49:13.485038  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 12:49:13.485042  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485047  504120 command_runner.go:130] >       "size": "1363676",
	I0812 12:49:13.485053  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485057  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485063  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485067  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485073  504120 command_runner.go:130] >     },
	I0812 12:49:13.485076  504120 command_runner.go:130] >     {
	I0812 12:49:13.485096  504120 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 12:49:13.485100  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485105  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 12:49:13.485113  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485120  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485127  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 12:49:13.485145  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 12:49:13.485151  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485156  504120 command_runner.go:130] >       "size": "31470524",
	I0812 12:49:13.485161  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485165  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485171  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485175  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485178  504120 command_runner.go:130] >     },
	I0812 12:49:13.485182  504120 command_runner.go:130] >     {
	I0812 12:49:13.485188  504120 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 12:49:13.485195  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485199  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 12:49:13.485203  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485207  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485214  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 12:49:13.485222  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 12:49:13.485226  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485230  504120 command_runner.go:130] >       "size": "61245718",
	I0812 12:49:13.485235  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485239  504120 command_runner.go:130] >       "username": "nonroot",
	I0812 12:49:13.485244  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485248  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485251  504120 command_runner.go:130] >     },
	I0812 12:49:13.485257  504120 command_runner.go:130] >     {
	I0812 12:49:13.485262  504120 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 12:49:13.485268  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485273  504120 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 12:49:13.485279  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485282  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485289  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 12:49:13.485298  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 12:49:13.485301  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485305  504120 command_runner.go:130] >       "size": "150779692",
	I0812 12:49:13.485316  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485322  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485326  504120 command_runner.go:130] >       },
	I0812 12:49:13.485332  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485336  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485342  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485346  504120 command_runner.go:130] >     },
	I0812 12:49:13.485352  504120 command_runner.go:130] >     {
	I0812 12:49:13.485358  504120 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 12:49:13.485364  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485369  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 12:49:13.485375  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485379  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485388  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 12:49:13.485397  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 12:49:13.485400  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485406  504120 command_runner.go:130] >       "size": "117609954",
	I0812 12:49:13.485409  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485415  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485419  504120 command_runner.go:130] >       },
	I0812 12:49:13.485425  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485429  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485435  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485438  504120 command_runner.go:130] >     },
	I0812 12:49:13.485443  504120 command_runner.go:130] >     {
	I0812 12:49:13.485449  504120 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 12:49:13.485455  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485460  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 12:49:13.485466  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485470  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485511  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 12:49:13.485522  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 12:49:13.485525  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485529  504120 command_runner.go:130] >       "size": "112198984",
	I0812 12:49:13.485532  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485542  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485552  504120 command_runner.go:130] >       },
	I0812 12:49:13.485556  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485560  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485563  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485567  504120 command_runner.go:130] >     },
	I0812 12:49:13.485570  504120 command_runner.go:130] >     {
	I0812 12:49:13.485575  504120 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 12:49:13.485586  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485590  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 12:49:13.485593  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485597  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485603  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 12:49:13.485610  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 12:49:13.485613  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485617  504120 command_runner.go:130] >       "size": "85953945",
	I0812 12:49:13.485621  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485624  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485627  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485631  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485634  504120 command_runner.go:130] >     },
	I0812 12:49:13.485637  504120 command_runner.go:130] >     {
	I0812 12:49:13.485642  504120 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 12:49:13.485646  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485660  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 12:49:13.485663  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485667  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485674  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 12:49:13.485683  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 12:49:13.485687  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485691  504120 command_runner.go:130] >       "size": "63051080",
	I0812 12:49:13.485694  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485698  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485702  504120 command_runner.go:130] >       },
	I0812 12:49:13.485708  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485712  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485716  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485724  504120 command_runner.go:130] >     },
	I0812 12:49:13.485730  504120 command_runner.go:130] >     {
	I0812 12:49:13.485736  504120 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 12:49:13.485739  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485744  504120 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 12:49:13.485747  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485757  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485766  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 12:49:13.485776  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 12:49:13.485781  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485785  504120 command_runner.go:130] >       "size": "750414",
	I0812 12:49:13.485788  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485792  504120 command_runner.go:130] >         "value": "65535"
	I0812 12:49:13.485796  504120 command_runner.go:130] >       },
	I0812 12:49:13.485800  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485803  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485807  504120 command_runner.go:130] >       "pinned": true
	I0812 12:49:13.485810  504120 command_runner.go:130] >     }
	I0812 12:49:13.485813  504120 command_runner.go:130] >   ]
	I0812 12:49:13.485816  504120 command_runner.go:130] > }
	I0812 12:49:13.486227  504120 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:49:13.486252  504120 crio.go:433] Images already preloaded, skipping extraction
	I0812 12:49:13.486305  504120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:49:13.531044  504120 command_runner.go:130] > {
	I0812 12:49:13.531072  504120 command_runner.go:130] >   "images": [
	I0812 12:49:13.531077  504120 command_runner.go:130] >     {
	I0812 12:49:13.531085  504120 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 12:49:13.531091  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531105  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 12:49:13.531109  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531113  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531124  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 12:49:13.531131  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 12:49:13.531137  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531144  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.531148  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531153  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531159  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531163  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531166  504120 command_runner.go:130] >     },
	I0812 12:49:13.531169  504120 command_runner.go:130] >     {
	I0812 12:49:13.531175  504120 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 12:49:13.531183  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531189  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 12:49:13.531193  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531197  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531204  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 12:49:13.531213  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 12:49:13.531217  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531220  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.531224  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531234  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531237  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531242  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531253  504120 command_runner.go:130] >     },
	I0812 12:49:13.531260  504120 command_runner.go:130] >     {
	I0812 12:49:13.531266  504120 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 12:49:13.531270  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531276  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 12:49:13.531279  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531283  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531292  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 12:49:13.531299  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 12:49:13.531305  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531308  504120 command_runner.go:130] >       "size": "1363676",
	I0812 12:49:13.531313  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531317  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531321  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531325  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531331  504120 command_runner.go:130] >     },
	I0812 12:49:13.531334  504120 command_runner.go:130] >     {
	I0812 12:49:13.531340  504120 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 12:49:13.531345  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531350  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 12:49:13.531353  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531357  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531364  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 12:49:13.531381  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 12:49:13.531387  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531392  504120 command_runner.go:130] >       "size": "31470524",
	I0812 12:49:13.531398  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531402  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531406  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531412  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531416  504120 command_runner.go:130] >     },
	I0812 12:49:13.531420  504120 command_runner.go:130] >     {
	I0812 12:49:13.531426  504120 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 12:49:13.531432  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531439  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 12:49:13.531447  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531460  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531469  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 12:49:13.531476  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 12:49:13.531482  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531486  504120 command_runner.go:130] >       "size": "61245718",
	I0812 12:49:13.531495  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531502  504120 command_runner.go:130] >       "username": "nonroot",
	I0812 12:49:13.531506  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531509  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531513  504120 command_runner.go:130] >     },
	I0812 12:49:13.531517  504120 command_runner.go:130] >     {
	I0812 12:49:13.531522  504120 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 12:49:13.531528  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531533  504120 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 12:49:13.531536  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531540  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531549  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 12:49:13.531558  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 12:49:13.531562  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531567  504120 command_runner.go:130] >       "size": "150779692",
	I0812 12:49:13.531572  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531576  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531582  504120 command_runner.go:130] >       },
	I0812 12:49:13.531586  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531592  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531596  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531600  504120 command_runner.go:130] >     },
	I0812 12:49:13.531605  504120 command_runner.go:130] >     {
	I0812 12:49:13.531611  504120 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 12:49:13.531618  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531624  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 12:49:13.531629  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531633  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531662  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 12:49:13.531676  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 12:49:13.531679  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531694  504120 command_runner.go:130] >       "size": "117609954",
	I0812 12:49:13.531698  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531702  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531705  504120 command_runner.go:130] >       },
	I0812 12:49:13.531708  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531712  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531716  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531719  504120 command_runner.go:130] >     },
	I0812 12:49:13.531722  504120 command_runner.go:130] >     {
	I0812 12:49:13.531727  504120 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 12:49:13.531731  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531736  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 12:49:13.531740  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531743  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531766  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 12:49:13.531775  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 12:49:13.531781  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531785  504120 command_runner.go:130] >       "size": "112198984",
	I0812 12:49:13.531791  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531795  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531801  504120 command_runner.go:130] >       },
	I0812 12:49:13.531805  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531811  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531815  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531820  504120 command_runner.go:130] >     },
	I0812 12:49:13.531823  504120 command_runner.go:130] >     {
	I0812 12:49:13.531832  504120 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 12:49:13.531838  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531843  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 12:49:13.531848  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531852  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531858  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 12:49:13.531871  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 12:49:13.531877  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531883  504120 command_runner.go:130] >       "size": "85953945",
	I0812 12:49:13.531889  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531898  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531905  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531909  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531914  504120 command_runner.go:130] >     },
	I0812 12:49:13.531918  504120 command_runner.go:130] >     {
	I0812 12:49:13.531926  504120 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 12:49:13.531932  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531937  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 12:49:13.531941  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531945  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531952  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 12:49:13.531962  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 12:49:13.531966  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531971  504120 command_runner.go:130] >       "size": "63051080",
	I0812 12:49:13.531974  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531978  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531982  504120 command_runner.go:130] >       },
	I0812 12:49:13.531993  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531999  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.532003  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.532006  504120 command_runner.go:130] >     },
	I0812 12:49:13.532009  504120 command_runner.go:130] >     {
	I0812 12:49:13.532015  504120 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 12:49:13.532022  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.532028  504120 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 12:49:13.532034  504120 command_runner.go:130] >       ],
	I0812 12:49:13.532040  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.532051  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 12:49:13.532063  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 12:49:13.532071  504120 command_runner.go:130] >       ],
	I0812 12:49:13.532077  504120 command_runner.go:130] >       "size": "750414",
	I0812 12:49:13.532085  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.532092  504120 command_runner.go:130] >         "value": "65535"
	I0812 12:49:13.532100  504120 command_runner.go:130] >       },
	I0812 12:49:13.532104  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.532111  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.532120  504120 command_runner.go:130] >       "pinned": true
	I0812 12:49:13.532126  504120 command_runner.go:130] >     }
	I0812 12:49:13.532129  504120 command_runner.go:130] >   ]
	I0812 12:49:13.532132  504120 command_runner.go:130] > }
	I0812 12:49:13.532453  504120 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:49:13.532472  504120 cache_images.go:84] Images are preloaded, skipping loading
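The two "sudo crictl images --output json" calls above both return the same preloaded image list, which is what the "all images are preloaded" / "Images are preloaded, skipping loading" messages rely on. Below is a minimal Go sketch of reproducing that check outside minikube; the type and field names are assumptions inferred from the JSON fields visible in the log ("id", "repoTags", "repoDigests", "size", "pinned"), not minikube's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the per-image fields shown in the log output above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

// crictlImageList mirrors the top-level {"images": [...]} envelope.
type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Equivalent to the "sudo crictl images --output json" call in the log.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Print the repo tags that are present; a preload check would compare
	// this set against the expected images for the Kubernetes version.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}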
	I0812 12:49:13.532480  504120 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.30.3 crio true true} ...
	I0812 12:49:13.532621  504120 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-276573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
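The kubelet drop-in printed by kubeadm.go:946 above uses the standard systemd override idiom: the empty "ExecStart=" line clears any ExecStart inherited from the base kubelet unit before the minikube-specific command line is set. A hedged Go sketch of rendering such a drop-in from the node config follows; the struct and template here are illustrative assumptions, and only the literal paths, flags, and values are taken from the log above.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the per-node values substituted into the drop-in.
type kubeletOpts struct {
	KubernetesVersion string // e.g. "v1.30.3"
	NodeName          string // --hostname-override, e.g. "multinode-276573"
	NodeIP            string // --node-ip, e.g. "192.168.39.187"
}

// kubeletUnit mirrors the unit text printed in the log. The bare "ExecStart="
// line resets the base unit's ExecStart so exactly one command remains.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	opts := kubeletOpts{
		KubernetesVersion: "v1.30.3",
		NodeName:          "multinode-276573",
		NodeIP:            "192.168.39.187",
	}
	// Writes the rendered drop-in to stdout; a real deployment would place it
	// under /etc/systemd/system/kubelet.service.d/ and reload systemd.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}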
	I0812 12:49:13.532693  504120 ssh_runner.go:195] Run: crio config
	I0812 12:49:13.572341  504120 command_runner.go:130] ! time="2024-08-12 12:49:13.544681131Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0812 12:49:13.578206  504120 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0812 12:49:13.585313  504120 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0812 12:49:13.585341  504120 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0812 12:49:13.585350  504120 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0812 12:49:13.585354  504120 command_runner.go:130] > #
	I0812 12:49:13.585364  504120 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0812 12:49:13.585374  504120 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0812 12:49:13.585383  504120 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0812 12:49:13.585405  504120 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0812 12:49:13.585414  504120 command_runner.go:130] > # reload'.
	I0812 12:49:13.585425  504120 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0812 12:49:13.585438  504120 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0812 12:49:13.585451  504120 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0812 12:49:13.585464  504120 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0812 12:49:13.585473  504120 command_runner.go:130] > [crio]
	I0812 12:49:13.585483  504120 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0812 12:49:13.585495  504120 command_runner.go:130] > # containers images, in this directory.
	I0812 12:49:13.585505  504120 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0812 12:49:13.585522  504120 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0812 12:49:13.585533  504120 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0812 12:49:13.585551  504120 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0812 12:49:13.585560  504120 command_runner.go:130] > # imagestore = ""
	I0812 12:49:13.585571  504120 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0812 12:49:13.585584  504120 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0812 12:49:13.585592  504120 command_runner.go:130] > storage_driver = "overlay"
	I0812 12:49:13.585605  504120 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0812 12:49:13.585617  504120 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0812 12:49:13.585642  504120 command_runner.go:130] > storage_option = [
	I0812 12:49:13.585653  504120 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0812 12:49:13.585660  504120 command_runner.go:130] > ]
	I0812 12:49:13.585672  504120 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0812 12:49:13.585685  504120 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0812 12:49:13.585696  504120 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0812 12:49:13.585708  504120 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0812 12:49:13.585719  504120 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0812 12:49:13.585729  504120 command_runner.go:130] > # always happen on a node reboot
	I0812 12:49:13.585740  504120 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0812 12:49:13.585762  504120 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0812 12:49:13.585779  504120 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0812 12:49:13.585790  504120 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0812 12:49:13.585799  504120 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0812 12:49:13.585812  504120 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0812 12:49:13.585827  504120 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0812 12:49:13.585836  504120 command_runner.go:130] > # internal_wipe = true
	I0812 12:49:13.585850  504120 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0812 12:49:13.585862  504120 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0812 12:49:13.585872  504120 command_runner.go:130] > # internal_repair = false
	I0812 12:49:13.585883  504120 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0812 12:49:13.585896  504120 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0812 12:49:13.585909  504120 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0812 12:49:13.585920  504120 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0812 12:49:13.585930  504120 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0812 12:49:13.585939  504120 command_runner.go:130] > [crio.api]
	I0812 12:49:13.585948  504120 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0812 12:49:13.585959  504120 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0812 12:49:13.585975  504120 command_runner.go:130] > # IP address on which the stream server will listen.
	I0812 12:49:13.585985  504120 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0812 12:49:13.585998  504120 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0812 12:49:13.586009  504120 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0812 12:49:13.586017  504120 command_runner.go:130] > # stream_port = "0"
	I0812 12:49:13.586029  504120 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0812 12:49:13.586037  504120 command_runner.go:130] > # stream_enable_tls = false
	I0812 12:49:13.586049  504120 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0812 12:49:13.586065  504120 command_runner.go:130] > # stream_idle_timeout = ""
	I0812 12:49:13.586078  504120 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0812 12:49:13.586089  504120 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0812 12:49:13.586099  504120 command_runner.go:130] > # minutes.
	I0812 12:49:13.586109  504120 command_runner.go:130] > # stream_tls_cert = ""
	I0812 12:49:13.586120  504120 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0812 12:49:13.586133  504120 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0812 12:49:13.586142  504120 command_runner.go:130] > # stream_tls_key = ""
	I0812 12:49:13.586152  504120 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0812 12:49:13.586164  504120 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0812 12:49:13.586203  504120 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0812 12:49:13.586213  504120 command_runner.go:130] > # stream_tls_ca = ""
	I0812 12:49:13.586225  504120 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 12:49:13.586234  504120 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0812 12:49:13.586248  504120 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 12:49:13.586259  504120 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0812 12:49:13.586271  504120 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0812 12:49:13.586283  504120 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0812 12:49:13.586291  504120 command_runner.go:130] > [crio.runtime]
	I0812 12:49:13.586302  504120 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0812 12:49:13.586314  504120 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0812 12:49:13.586323  504120 command_runner.go:130] > # "nofile=1024:2048"
	I0812 12:49:13.586334  504120 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0812 12:49:13.586353  504120 command_runner.go:130] > # default_ulimits = [
	I0812 12:49:13.586362  504120 command_runner.go:130] > # ]
	I0812 12:49:13.586373  504120 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0812 12:49:13.586382  504120 command_runner.go:130] > # no_pivot = false
	I0812 12:49:13.586392  504120 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0812 12:49:13.586405  504120 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0812 12:49:13.586415  504120 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0812 12:49:13.586428  504120 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0812 12:49:13.586439  504120 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0812 12:49:13.586453  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 12:49:13.586464  504120 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0812 12:49:13.586474  504120 command_runner.go:130] > # Cgroup setting for conmon
	I0812 12:49:13.586488  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0812 12:49:13.586512  504120 command_runner.go:130] > conmon_cgroup = "pod"
	I0812 12:49:13.586525  504120 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0812 12:49:13.586535  504120 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0812 12:49:13.586560  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 12:49:13.586569  504120 command_runner.go:130] > conmon_env = [
	I0812 12:49:13.586579  504120 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 12:49:13.586586  504120 command_runner.go:130] > ]
	I0812 12:49:13.586595  504120 command_runner.go:130] > # Additional environment variables to set for all the
	I0812 12:49:13.586606  504120 command_runner.go:130] > # containers. These are overridden if set in the
	I0812 12:49:13.586616  504120 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0812 12:49:13.586626  504120 command_runner.go:130] > # default_env = [
	I0812 12:49:13.586633  504120 command_runner.go:130] > # ]
	I0812 12:49:13.586643  504120 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0812 12:49:13.586659  504120 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0812 12:49:13.586668  504120 command_runner.go:130] > # selinux = false
	I0812 12:49:13.586678  504120 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0812 12:49:13.586690  504120 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0812 12:49:13.586699  504120 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0812 12:49:13.586709  504120 command_runner.go:130] > # seccomp_profile = ""
	I0812 12:49:13.586720  504120 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0812 12:49:13.586732  504120 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0812 12:49:13.586745  504120 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0812 12:49:13.586756  504120 command_runner.go:130] > # which might increase security.
	I0812 12:49:13.586765  504120 command_runner.go:130] > # This option is currently deprecated,
	I0812 12:49:13.586777  504120 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0812 12:49:13.586786  504120 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0812 12:49:13.586798  504120 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0812 12:49:13.586809  504120 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0812 12:49:13.586822  504120 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0812 12:49:13.586835  504120 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0812 12:49:13.586847  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.586857  504120 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0812 12:49:13.586869  504120 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0812 12:49:13.586879  504120 command_runner.go:130] > # the cgroup blockio controller.
	I0812 12:49:13.586889  504120 command_runner.go:130] > # blockio_config_file = ""
	I0812 12:49:13.586903  504120 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0812 12:49:13.586919  504120 command_runner.go:130] > # blockio parameters.
	I0812 12:49:13.586929  504120 command_runner.go:130] > # blockio_reload = false
	I0812 12:49:13.586941  504120 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0812 12:49:13.586950  504120 command_runner.go:130] > # irqbalance daemon.
	I0812 12:49:13.586961  504120 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0812 12:49:13.586973  504120 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0812 12:49:13.586991  504120 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0812 12:49:13.587005  504120 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0812 12:49:13.587018  504120 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0812 12:49:13.587030  504120 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0812 12:49:13.587040  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.587050  504120 command_runner.go:130] > # rdt_config_file = ""
	I0812 12:49:13.587062  504120 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0812 12:49:13.587070  504120 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0812 12:49:13.587115  504120 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0812 12:49:13.587125  504120 command_runner.go:130] > # separate_pull_cgroup = ""
	I0812 12:49:13.587135  504120 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0812 12:49:13.587148  504120 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0812 12:49:13.587157  504120 command_runner.go:130] > # will be added.
	I0812 12:49:13.587164  504120 command_runner.go:130] > # default_capabilities = [
	I0812 12:49:13.587173  504120 command_runner.go:130] > # 	"CHOWN",
	I0812 12:49:13.587181  504120 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0812 12:49:13.587190  504120 command_runner.go:130] > # 	"FSETID",
	I0812 12:49:13.587197  504120 command_runner.go:130] > # 	"FOWNER",
	I0812 12:49:13.587205  504120 command_runner.go:130] > # 	"SETGID",
	I0812 12:49:13.587212  504120 command_runner.go:130] > # 	"SETUID",
	I0812 12:49:13.587221  504120 command_runner.go:130] > # 	"SETPCAP",
	I0812 12:49:13.587228  504120 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0812 12:49:13.587237  504120 command_runner.go:130] > # 	"KILL",
	I0812 12:49:13.587244  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587256  504120 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0812 12:49:13.587270  504120 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0812 12:49:13.587281  504120 command_runner.go:130] > # add_inheritable_capabilities = false
	I0812 12:49:13.587293  504120 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0812 12:49:13.587305  504120 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 12:49:13.587314  504120 command_runner.go:130] > default_sysctls = [
	I0812 12:49:13.587329  504120 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0812 12:49:13.587338  504120 command_runner.go:130] > ]
	I0812 12:49:13.587346  504120 command_runner.go:130] > # List of devices on the host that a
	I0812 12:49:13.587359  504120 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0812 12:49:13.587369  504120 command_runner.go:130] > # allowed_devices = [
	I0812 12:49:13.587377  504120 command_runner.go:130] > # 	"/dev/fuse",
	I0812 12:49:13.587383  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587392  504120 command_runner.go:130] > # List of additional devices. specified as
	I0812 12:49:13.587406  504120 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0812 12:49:13.587418  504120 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0812 12:49:13.587430  504120 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 12:49:13.587441  504120 command_runner.go:130] > # additional_devices = [
	I0812 12:49:13.587448  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587458  504120 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0812 12:49:13.587468  504120 command_runner.go:130] > # cdi_spec_dirs = [
	I0812 12:49:13.587476  504120 command_runner.go:130] > # 	"/etc/cdi",
	I0812 12:49:13.587484  504120 command_runner.go:130] > # 	"/var/run/cdi",
	I0812 12:49:13.587489  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587500  504120 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0812 12:49:13.587513  504120 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0812 12:49:13.587522  504120 command_runner.go:130] > # Defaults to false.
	I0812 12:49:13.587532  504120 command_runner.go:130] > # device_ownership_from_security_context = false
	I0812 12:49:13.587550  504120 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0812 12:49:13.587562  504120 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0812 12:49:13.587571  504120 command_runner.go:130] > # hooks_dir = [
	I0812 12:49:13.587586  504120 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0812 12:49:13.587594  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587605  504120 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0812 12:49:13.587619  504120 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0812 12:49:13.587631  504120 command_runner.go:130] > # its default mounts from the following two files:
	I0812 12:49:13.587639  504120 command_runner.go:130] > #
	I0812 12:49:13.587650  504120 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0812 12:49:13.587662  504120 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0812 12:49:13.587672  504120 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0812 12:49:13.587680  504120 command_runner.go:130] > #
	I0812 12:49:13.587689  504120 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0812 12:49:13.587710  504120 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0812 12:49:13.587723  504120 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0812 12:49:13.587734  504120 command_runner.go:130] > #      only add mounts it finds in this file.
	I0812 12:49:13.587742  504120 command_runner.go:130] > #
	I0812 12:49:13.587750  504120 command_runner.go:130] > # default_mounts_file = ""
	I0812 12:49:13.587761  504120 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0812 12:49:13.587773  504120 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0812 12:49:13.587782  504120 command_runner.go:130] > pids_limit = 1024
	I0812 12:49:13.587794  504120 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0812 12:49:13.587806  504120 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0812 12:49:13.587819  504120 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0812 12:49:13.587835  504120 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0812 12:49:13.587845  504120 command_runner.go:130] > # log_size_max = -1
	I0812 12:49:13.587858  504120 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0812 12:49:13.587868  504120 command_runner.go:130] > # log_to_journald = false
	I0812 12:49:13.587885  504120 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0812 12:49:13.587896  504120 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0812 12:49:13.587908  504120 command_runner.go:130] > # Path to directory for container attach sockets.
	I0812 12:49:13.587918  504120 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0812 12:49:13.587927  504120 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0812 12:49:13.587937  504120 command_runner.go:130] > # bind_mount_prefix = ""
	I0812 12:49:13.587949  504120 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0812 12:49:13.587957  504120 command_runner.go:130] > # read_only = false
	I0812 12:49:13.587968  504120 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0812 12:49:13.587980  504120 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0812 12:49:13.587990  504120 command_runner.go:130] > # live configuration reload.
	I0812 12:49:13.587998  504120 command_runner.go:130] > # log_level = "info"
	I0812 12:49:13.588009  504120 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0812 12:49:13.588021  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.588029  504120 command_runner.go:130] > # log_filter = ""
	I0812 12:49:13.588042  504120 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0812 12:49:13.588058  504120 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0812 12:49:13.588067  504120 command_runner.go:130] > # separated by comma.
	I0812 12:49:13.588082  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588091  504120 command_runner.go:130] > # uid_mappings = ""
	I0812 12:49:13.588104  504120 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0812 12:49:13.588124  504120 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0812 12:49:13.588134  504120 command_runner.go:130] > # separated by comma.
	I0812 12:49:13.588147  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588157  504120 command_runner.go:130] > # gid_mappings = ""
	I0812 12:49:13.588167  504120 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0812 12:49:13.588180  504120 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 12:49:13.588192  504120 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 12:49:13.588207  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588218  504120 command_runner.go:130] > # minimum_mappable_uid = -1
	I0812 12:49:13.588231  504120 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0812 12:49:13.588244  504120 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 12:49:13.588254  504120 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 12:49:13.588270  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588280  504120 command_runner.go:130] > # minimum_mappable_gid = -1
	I0812 12:49:13.588294  504120 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0812 12:49:13.588307  504120 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0812 12:49:13.588319  504120 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0812 12:49:13.588327  504120 command_runner.go:130] > # ctr_stop_timeout = 30
	I0812 12:49:13.588336  504120 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0812 12:49:13.588349  504120 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0812 12:49:13.588359  504120 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0812 12:49:13.588374  504120 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0812 12:49:13.588389  504120 command_runner.go:130] > drop_infra_ctr = false
	I0812 12:49:13.588402  504120 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0812 12:49:13.588414  504120 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0812 12:49:13.588429  504120 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0812 12:49:13.588438  504120 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0812 12:49:13.588452  504120 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0812 12:49:13.588464  504120 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0812 12:49:13.588477  504120 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0812 12:49:13.588489  504120 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0812 12:49:13.588498  504120 command_runner.go:130] > # shared_cpuset = ""
	I0812 12:49:13.588510  504120 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0812 12:49:13.588520  504120 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0812 12:49:13.588530  504120 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0812 12:49:13.588549  504120 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0812 12:49:13.588566  504120 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0812 12:49:13.588579  504120 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0812 12:49:13.588590  504120 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0812 12:49:13.588600  504120 command_runner.go:130] > # enable_criu_support = false
	I0812 12:49:13.588613  504120 command_runner.go:130] > # Enable/disable the generation of the container,
	I0812 12:49:13.588626  504120 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0812 12:49:13.588636  504120 command_runner.go:130] > # enable_pod_events = false
	I0812 12:49:13.588648  504120 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0812 12:49:13.588673  504120 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0812 12:49:13.588682  504120 command_runner.go:130] > # default_runtime = "runc"
	I0812 12:49:13.588692  504120 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0812 12:49:13.588707  504120 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0812 12:49:13.588724  504120 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0812 12:49:13.588735  504120 command_runner.go:130] > # creation as a file is not desired either.
	I0812 12:49:13.588751  504120 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0812 12:49:13.588761  504120 command_runner.go:130] > # the hostname is being managed dynamically.
	I0812 12:49:13.588771  504120 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0812 12:49:13.588778  504120 command_runner.go:130] > # ]
	I0812 12:49:13.588789  504120 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0812 12:49:13.588801  504120 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0812 12:49:13.588812  504120 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0812 12:49:13.588823  504120 command_runner.go:130] > # Each entry in the table should follow the format:
	I0812 12:49:13.588831  504120 command_runner.go:130] > #
	I0812 12:49:13.588839  504120 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0812 12:49:13.588850  504120 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0812 12:49:13.588920  504120 command_runner.go:130] > # runtime_type = "oci"
	I0812 12:49:13.588931  504120 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0812 12:49:13.588938  504120 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0812 12:49:13.588945  504120 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0812 12:49:13.588953  504120 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0812 12:49:13.588963  504120 command_runner.go:130] > # monitor_env = []
	I0812 12:49:13.588971  504120 command_runner.go:130] > # privileged_without_host_devices = false
	I0812 12:49:13.588982  504120 command_runner.go:130] > # allowed_annotations = []
	I0812 12:49:13.588992  504120 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0812 12:49:13.589000  504120 command_runner.go:130] > # Where:
	I0812 12:49:13.589023  504120 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0812 12:49:13.589036  504120 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0812 12:49:13.589047  504120 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0812 12:49:13.589060  504120 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0812 12:49:13.589070  504120 command_runner.go:130] > #   in $PATH.
	I0812 12:49:13.589100  504120 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0812 12:49:13.589111  504120 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0812 12:49:13.589121  504120 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0812 12:49:13.589130  504120 command_runner.go:130] > #   state.
	I0812 12:49:13.589141  504120 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0812 12:49:13.589153  504120 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0812 12:49:13.589167  504120 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0812 12:49:13.589179  504120 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0812 12:49:13.589192  504120 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0812 12:49:13.589206  504120 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0812 12:49:13.589217  504120 command_runner.go:130] > #   The currently recognized values are:
	I0812 12:49:13.589229  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0812 12:49:13.589244  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0812 12:49:13.589256  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0812 12:49:13.589270  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0812 12:49:13.589284  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0812 12:49:13.589297  504120 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0812 12:49:13.589309  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0812 12:49:13.589329  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0812 12:49:13.589342  504120 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0812 12:49:13.589355  504120 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0812 12:49:13.589365  504120 command_runner.go:130] > #   deprecated option "conmon".
	I0812 12:49:13.589379  504120 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0812 12:49:13.589388  504120 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0812 12:49:13.589400  504120 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0812 12:49:13.589411  504120 command_runner.go:130] > #   should be moved to the container's cgroup
	I0812 12:49:13.589422  504120 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0812 12:49:13.589433  504120 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0812 12:49:13.589445  504120 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0812 12:49:13.589456  504120 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0812 12:49:13.589464  504120 command_runner.go:130] > #
	I0812 12:49:13.589480  504120 command_runner.go:130] > # Using the seccomp notifier feature:
	I0812 12:49:13.589488  504120 command_runner.go:130] > #
	I0812 12:49:13.589498  504120 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0812 12:49:13.589511  504120 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0812 12:49:13.589519  504120 command_runner.go:130] > #
	I0812 12:49:13.589530  504120 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0812 12:49:13.589542  504120 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0812 12:49:13.589554  504120 command_runner.go:130] > #
	I0812 12:49:13.589565  504120 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0812 12:49:13.589573  504120 command_runner.go:130] > # feature.
	I0812 12:49:13.589579  504120 command_runner.go:130] > #
	I0812 12:49:13.589592  504120 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0812 12:49:13.589605  504120 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0812 12:49:13.589618  504120 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0812 12:49:13.589631  504120 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0812 12:49:13.589643  504120 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0812 12:49:13.589651  504120 command_runner.go:130] > #
	I0812 12:49:13.589662  504120 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0812 12:49:13.589674  504120 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0812 12:49:13.589682  504120 command_runner.go:130] > #
	I0812 12:49:13.589697  504120 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0812 12:49:13.589710  504120 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0812 12:49:13.589717  504120 command_runner.go:130] > #
	I0812 12:49:13.589728  504120 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0812 12:49:13.589741  504120 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0812 12:49:13.589747  504120 command_runner.go:130] > # limitation.
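The comment block above describes how the seccomp notifier is enabled. As a minimal illustrative sketch (not part of this test's configuration), a runtime handler that is allowed to process the notifier annotation could be declared as follows; the handler name "runc-debug" is an assumption made only for the example.
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	# Allow the notifier annotation so pods using this handler can opt in.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# A pod would then carry the annotation
	# io.kubernetes.cri-o.seccompNotifierAction=stop (with restartPolicy: Never)
	# to have CRI-O terminate it after blocked syscalls, as explained above.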
	I0812 12:49:13.589785  504120 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0812 12:49:13.589806  504120 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0812 12:49:13.589815  504120 command_runner.go:130] > runtime_type = "oci"
	I0812 12:49:13.589824  504120 command_runner.go:130] > runtime_root = "/run/runc"
	I0812 12:49:13.589832  504120 command_runner.go:130] > runtime_config_path = ""
	I0812 12:49:13.589843  504120 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0812 12:49:13.589852  504120 command_runner.go:130] > monitor_cgroup = "pod"
	I0812 12:49:13.589860  504120 command_runner.go:130] > monitor_exec_cgroup = ""
	I0812 12:49:13.589867  504120 command_runner.go:130] > monitor_env = [
	I0812 12:49:13.589878  504120 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 12:49:13.589895  504120 command_runner.go:130] > ]
	I0812 12:49:13.589907  504120 command_runner.go:130] > privileged_without_host_devices = false
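For comparison with the runc entry just above, here is a sketch of how an additional runtime handler could be declared using the fields documented earlier in this config; the crun handler, its paths, and the platform mapping are illustrative assumptions, not something configured in this run.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # absolute path; if omitted, "crun" must be found in $PATH
	runtime_type = "oci"             # "oci" (the default) or "vm"
	runtime_root = "/run/crun"       # root directory for this runtime's container state
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Optional per-platform binaries for the same handler name.
	platform_runtime_paths = { "linux/arm64" = "/usr/local/bin/crun-arm64" }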
	I0812 12:49:13.589920  504120 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0812 12:49:13.589932  504120 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0812 12:49:13.589945  504120 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0812 12:49:13.589961  504120 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0812 12:49:13.589976  504120 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0812 12:49:13.589989  504120 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0812 12:49:13.590007  504120 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0812 12:49:13.590021  504120 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0812 12:49:13.590028  504120 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0812 12:49:13.590037  504120 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0812 12:49:13.590043  504120 command_runner.go:130] > # Example:
	I0812 12:49:13.590051  504120 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0812 12:49:13.590058  504120 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0812 12:49:13.590065  504120 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0812 12:49:13.590073  504120 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0812 12:49:13.590080  504120 command_runner.go:130] > # cpuset = "0-1"
	I0812 12:49:13.590086  504120 command_runner.go:130] > # cpushares = 0
	I0812 12:49:13.590092  504120 command_runner.go:130] > # Where:
	I0812 12:49:13.590100  504120 command_runner.go:130] > # The workload name is workload-type.
	I0812 12:49:13.590113  504120 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0812 12:49:13.590122  504120 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0812 12:49:13.590131  504120 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0812 12:49:13.590143  504120 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0812 12:49:13.590152  504120 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
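Reading the workload comments above together, a complete hypothetical workloads entry plus the pod-side annotations that would opt a container into it might look like the following; the workload name "throttled" and all values are assumptions for illustration.
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512     # default CPU shares for opted-in containers
	cpuset = "0-3"      # default cpuset, Linux CPU list format
	# Pod-side annotations (on the pod metadata), per the scheme described above:
	#   io.crio/throttled: ""                       # opt-in; the value is ignored
	#   io.crio.throttled.cpushares/my-ctr: "256"   # per-container override for "my-ctr"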
	I0812 12:49:13.590160  504120 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0812 12:49:13.590170  504120 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0812 12:49:13.590178  504120 command_runner.go:130] > # Default value is set to true
	I0812 12:49:13.590185  504120 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0812 12:49:13.590195  504120 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0812 12:49:13.590202  504120 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0812 12:49:13.590209  504120 command_runner.go:130] > # Default value is set to 'false'
	I0812 12:49:13.590216  504120 command_runner.go:130] > # disable_hostport_mapping = false
	I0812 12:49:13.590226  504120 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0812 12:49:13.590231  504120 command_runner.go:130] > #
	I0812 12:49:13.590247  504120 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0812 12:49:13.590260  504120 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0812 12:49:13.590273  504120 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0812 12:49:13.590284  504120 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0812 12:49:13.590296  504120 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0812 12:49:13.590306  504120 command_runner.go:130] > [crio.image]
	I0812 12:49:13.590316  504120 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0812 12:49:13.590326  504120 command_runner.go:130] > # default_transport = "docker://"
	I0812 12:49:13.590338  504120 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0812 12:49:13.590352  504120 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0812 12:49:13.590361  504120 command_runner.go:130] > # global_auth_file = ""
	I0812 12:49:13.590370  504120 command_runner.go:130] > # The image used to instantiate infra containers.
	I0812 12:49:13.590381  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.590390  504120 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0812 12:49:13.590404  504120 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0812 12:49:13.590416  504120 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0812 12:49:13.590427  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.590438  504120 command_runner.go:130] > # pause_image_auth_file = ""
	I0812 12:49:13.590449  504120 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0812 12:49:13.590459  504120 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0812 12:49:13.590472  504120 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0812 12:49:13.590484  504120 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0812 12:49:13.590495  504120 command_runner.go:130] > # pause_command = "/pause"
	I0812 12:49:13.590506  504120 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0812 12:49:13.590519  504120 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0812 12:49:13.590541  504120 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0812 12:49:13.590562  504120 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0812 12:49:13.590575  504120 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0812 12:49:13.590588  504120 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0812 12:49:13.590598  504120 command_runner.go:130] > # pinned_images = [
	I0812 12:49:13.590605  504120 command_runner.go:130] > # ]
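As a sketch of the three pattern styles described above (exact, glob, keyword), an illustrative pinned_images list follows; the image names are assumptions, not images pinned by this run.
	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact: the full name must match
		"registry.k8s.io/kube-*",      # glob: a wildcard is allowed at the end
		"*coredns*",                   # keyword: wildcards on both ends
	]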
	I0812 12:49:13.590617  504120 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0812 12:49:13.590628  504120 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0812 12:49:13.590642  504120 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0812 12:49:13.590655  504120 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0812 12:49:13.590666  504120 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0812 12:49:13.590683  504120 command_runner.go:130] > # signature_policy = ""
	I0812 12:49:13.590696  504120 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0812 12:49:13.590710  504120 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0812 12:49:13.590723  504120 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0812 12:49:13.590737  504120 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0812 12:49:13.590749  504120 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0812 12:49:13.590761  504120 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0812 12:49:13.590774  504120 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0812 12:49:13.590787  504120 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0812 12:49:13.590795  504120 command_runner.go:130] > # changing them here.
	I0812 12:49:13.590803  504120 command_runner.go:130] > # insecure_registries = [
	I0812 12:49:13.590811  504120 command_runner.go:130] > # ]
	I0812 12:49:13.590822  504120 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0812 12:49:13.590833  504120 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0812 12:49:13.590843  504120 command_runner.go:130] > # image_volumes = "mkdir"
	I0812 12:49:13.590853  504120 command_runner.go:130] > # Temporary directory to use for storing big files
	I0812 12:49:13.590861  504120 command_runner.go:130] > # big_files_temporary_dir = ""
	I0812 12:49:13.590874  504120 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0812 12:49:13.590883  504120 command_runner.go:130] > # CNI plugins.
	I0812 12:49:13.590890  504120 command_runner.go:130] > [crio.network]
	I0812 12:49:13.590903  504120 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0812 12:49:13.590915  504120 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0812 12:49:13.590926  504120 command_runner.go:130] > # cni_default_network = ""
	I0812 12:49:13.590939  504120 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0812 12:49:13.590948  504120 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0812 12:49:13.590957  504120 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0812 12:49:13.590966  504120 command_runner.go:130] > # plugin_dirs = [
	I0812 12:49:13.590973  504120 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0812 12:49:13.590981  504120 command_runner.go:130] > # ]
	I0812 12:49:13.590991  504120 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0812 12:49:13.591000  504120 command_runner.go:130] > [crio.metrics]
	I0812 12:49:13.591009  504120 command_runner.go:130] > # Globally enable or disable metrics support.
	I0812 12:49:13.591019  504120 command_runner.go:130] > enable_metrics = true
	I0812 12:49:13.591028  504120 command_runner.go:130] > # Specify enabled metrics collectors.
	I0812 12:49:13.591038  504120 command_runner.go:130] > # Per default all metrics are enabled.
	I0812 12:49:13.591048  504120 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0812 12:49:13.591068  504120 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0812 12:49:13.591081  504120 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0812 12:49:13.591091  504120 command_runner.go:130] > # metrics_collectors = [
	I0812 12:49:13.591100  504120 command_runner.go:130] > # 	"operations",
	I0812 12:49:13.591109  504120 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0812 12:49:13.591124  504120 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0812 12:49:13.591135  504120 command_runner.go:130] > # 	"operations_errors",
	I0812 12:49:13.591144  504120 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0812 12:49:13.591152  504120 command_runner.go:130] > # 	"image_pulls_by_name",
	I0812 12:49:13.591163  504120 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0812 12:49:13.591171  504120 command_runner.go:130] > # 	"image_pulls_failures",
	I0812 12:49:13.591179  504120 command_runner.go:130] > # 	"image_pulls_successes",
	I0812 12:49:13.591187  504120 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0812 12:49:13.591195  504120 command_runner.go:130] > # 	"image_layer_reuse",
	I0812 12:49:13.591203  504120 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0812 12:49:13.591213  504120 command_runner.go:130] > # 	"containers_oom_total",
	I0812 12:49:13.591222  504120 command_runner.go:130] > # 	"containers_oom",
	I0812 12:49:13.591229  504120 command_runner.go:130] > # 	"processes_defunct",
	I0812 12:49:13.591236  504120 command_runner.go:130] > # 	"operations_total",
	I0812 12:49:13.591246  504120 command_runner.go:130] > # 	"operations_latency_seconds",
	I0812 12:49:13.591255  504120 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0812 12:49:13.591265  504120 command_runner.go:130] > # 	"operations_errors_total",
	I0812 12:49:13.591273  504120 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0812 12:49:13.591284  504120 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0812 12:49:13.591293  504120 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0812 12:49:13.591300  504120 command_runner.go:130] > # 	"image_pulls_success_total",
	I0812 12:49:13.591308  504120 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0812 12:49:13.591316  504120 command_runner.go:130] > # 	"containers_oom_count_total",
	I0812 12:49:13.591327  504120 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0812 12:49:13.591337  504120 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0812 12:49:13.591343  504120 command_runner.go:130] > # ]
	I0812 12:49:13.591352  504120 command_runner.go:130] > # The port on which the metrics server will listen.
	I0812 12:49:13.591359  504120 command_runner.go:130] > # metrics_port = 9090
	I0812 12:49:13.591371  504120 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0812 12:49:13.591381  504120 command_runner.go:130] > # metrics_socket = ""
	I0812 12:49:13.591391  504120 command_runner.go:130] > # The certificate for the secure metrics server.
	I0812 12:49:13.591410  504120 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0812 12:49:13.591423  504120 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0812 12:49:13.591434  504120 command_runner.go:130] > # certificate on any modification event.
	I0812 12:49:13.591442  504120 command_runner.go:130] > # metrics_cert = ""
	I0812 12:49:13.591453  504120 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0812 12:49:13.591464  504120 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0812 12:49:13.591472  504120 command_runner.go:130] > # metrics_key = ""
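A sketch of how the metrics settings above could be narrowed down; this run only sets enable_metrics = true, so the explicit port and the collector subset below are illustrative assumptions.
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Collect only a subset instead of the full default collector list.
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]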
	I0812 12:49:13.591484  504120 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0812 12:49:13.591494  504120 command_runner.go:130] > [crio.tracing]
	I0812 12:49:13.591504  504120 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0812 12:49:13.591513  504120 command_runner.go:130] > # enable_tracing = false
	I0812 12:49:13.591523  504120 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0812 12:49:13.591532  504120 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0812 12:49:13.591550  504120 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0812 12:49:13.591561  504120 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0812 12:49:13.591572  504120 command_runner.go:130] > # CRI-O NRI configuration.
	I0812 12:49:13.591580  504120 command_runner.go:130] > [crio.nri]
	I0812 12:49:13.591588  504120 command_runner.go:130] > # Globally enable or disable NRI.
	I0812 12:49:13.591602  504120 command_runner.go:130] > # enable_nri = false
	I0812 12:49:13.591612  504120 command_runner.go:130] > # NRI socket to listen on.
	I0812 12:49:13.591620  504120 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0812 12:49:13.591629  504120 command_runner.go:130] > # NRI plugin directory to use.
	I0812 12:49:13.591637  504120 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0812 12:49:13.591649  504120 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0812 12:49:13.591659  504120 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0812 12:49:13.591669  504120 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0812 12:49:13.591679  504120 command_runner.go:130] > # nri_disable_connections = false
	I0812 12:49:13.591691  504120 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0812 12:49:13.591699  504120 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0812 12:49:13.591707  504120 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0812 12:49:13.591718  504120 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0812 12:49:13.591731  504120 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0812 12:49:13.591740  504120 command_runner.go:130] > [crio.stats]
	I0812 12:49:13.591750  504120 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0812 12:49:13.591762  504120 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0812 12:49:13.591772  504120 command_runner.go:130] > # stats_collection_period = 0
	I0812 12:49:13.591963  504120 cni.go:84] Creating CNI manager for ""
	I0812 12:49:13.591979  504120 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 12:49:13.591995  504120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:49:13.592026  504120 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-276573 NodeName:multinode-276573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:49:13.592205  504120 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-276573"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 12:49:13.592287  504120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:49:13.602993  504120 command_runner.go:130] > kubeadm
	I0812 12:49:13.603020  504120 command_runner.go:130] > kubectl
	I0812 12:49:13.603025  504120 command_runner.go:130] > kubelet
	I0812 12:49:13.603045  504120 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:49:13.603101  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:49:13.613036  504120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0812 12:49:13.630443  504120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:49:13.647619  504120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0812 12:49:13.665459  504120 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0812 12:49:13.669965  504120 command_runner.go:130] > 192.168.39.187	control-plane.minikube.internal
	I0812 12:49:13.670179  504120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:49:13.822660  504120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:49:13.838557  504120 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573 for IP: 192.168.39.187
	I0812 12:49:13.838585  504120 certs.go:194] generating shared ca certs ...
	I0812 12:49:13.838609  504120 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:49:13.838852  504120 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:49:13.838922  504120 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:49:13.838935  504120 certs.go:256] generating profile certs ...
	I0812 12:49:13.839058  504120 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/client.key
	I0812 12:49:13.839144  504120 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key.8ffd67ec
	I0812 12:49:13.839198  504120 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key
	I0812 12:49:13.839214  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:49:13.839235  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:49:13.839252  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:49:13.839268  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:49:13.839282  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:49:13.839301  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:49:13.839319  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:49:13.839335  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:49:13.839396  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:49:13.839441  504120 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:49:13.839452  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:49:13.839491  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:49:13.839536  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:49:13.839574  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:49:13.839631  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:49:13.839688  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:49:13.839709  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:13.839734  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:49:13.840657  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:49:13.868791  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:49:13.894809  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:49:13.921370  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:49:13.948316  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 12:49:13.975097  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:49:14.002642  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:49:14.028118  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 12:49:14.053009  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:49:14.076832  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:49:14.102494  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:49:14.127311  504120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:49:14.144820  504120 ssh_runner.go:195] Run: openssl version
	I0812 12:49:14.150848  504120 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0812 12:49:14.150944  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:49:14.161931  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166600  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166653  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166695  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.172513  504120 command_runner.go:130] > 3ec20f2e
	I0812 12:49:14.172595  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:49:14.182703  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:49:14.194410  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199814  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199858  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199906  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.206190  504120 command_runner.go:130] > b5213941
	I0812 12:49:14.206297  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:49:14.216949  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:49:14.228448  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233179  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233219  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233266  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.238863  504120 command_runner.go:130] > 51391683
	I0812 12:49:14.238957  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 12:49:14.248320  504120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:49:14.252975  504120 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:49:14.253008  504120 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0812 12:49:14.253016  504120 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0812 12:49:14.253025  504120 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 12:49:14.253036  504120 command_runner.go:130] > Access: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253043  504120 command_runner.go:130] > Modify: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253050  504120 command_runner.go:130] > Change: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253058  504120 command_runner.go:130] >  Birth: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253188  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 12:49:14.258934  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.259087  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 12:49:14.264841  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.264904  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 12:49:14.270480  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.270646  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 12:49:14.276205  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.276278  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 12:49:14.282367  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.282555  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 12:49:14.288371  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.288435  504120 kubeadm.go:392] StartCluster: {Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:49:14.288563  504120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:49:14.288638  504120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:49:14.329243  504120 command_runner.go:130] > 4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809
	I0812 12:49:14.329279  504120 command_runner.go:130] > aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb
	I0812 12:49:14.329287  504120 command_runner.go:130] > fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd
	I0812 12:49:14.329296  504120 command_runner.go:130] > 129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c
	I0812 12:49:14.329304  504120 command_runner.go:130] > af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c
	I0812 12:49:14.329312  504120 command_runner.go:130] > 419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510
	I0812 12:49:14.329320  504120 command_runner.go:130] > e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9
	I0812 12:49:14.329331  504120 command_runner.go:130] > 877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63
	I0812 12:49:14.330630  504120 cri.go:89] found id: "4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809"
	I0812 12:49:14.330644  504120 cri.go:89] found id: "aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb"
	I0812 12:49:14.330647  504120 cri.go:89] found id: "fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd"
	I0812 12:49:14.330650  504120 cri.go:89] found id: "129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c"
	I0812 12:49:14.330652  504120 cri.go:89] found id: "af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c"
	I0812 12:49:14.330655  504120 cri.go:89] found id: "419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510"
	I0812 12:49:14.330658  504120 cri.go:89] found id: "e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9"
	I0812 12:49:14.330661  504120 cri.go:89] found id: "877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63"
	I0812 12:49:14.330663  504120 cri.go:89] found id: ""
	I0812 12:49:14.330710  504120 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.233558999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467060233536217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9849b371-0df0-4a88-b720-d54b3d397022 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.234166903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15d93e98-43d5-4f09-9b5b-bd3113f92e56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.234246437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15d93e98-43d5-4f09-9b5b-bd3113f92e56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.234610285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15d93e98-43d5-4f09-9b5b-bd3113f92e56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.282768495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f36c3015-806e-49bc-bd1c-70b7e681fc58 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.282862670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f36c3015-806e-49bc-bd1c-70b7e681fc58 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.284368512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6287f8d0-c5a5-4488-bf8a-cfb7704b1e40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.285234961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467060285152755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6287f8d0-c5a5-4488-bf8a-cfb7704b1e40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.286005403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcf54d90-be7c-4bab-b0a3-03c7b335656c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.286080988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcf54d90-be7c-4bab-b0a3-03c7b335656c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.286422370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcf54d90-be7c-4bab-b0a3-03c7b335656c name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.329879200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29a3e25e-ff0b-40e5-a424-4c4e70114415 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.330012799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29a3e25e-ff0b-40e5-a424-4c4e70114415 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.331145958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92b70f44-4588-47cd-9c94-ca3aafbf2f73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.331556925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467060331535777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92b70f44-4588-47cd-9c94-ca3aafbf2f73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.332080968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8bd0d30-a10c-416f-a08f-ca22e7de5a47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.332156422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8bd0d30-a10c-416f-a08f-ca22e7de5a47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.332509278Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8bd0d30-a10c-416f-a08f-ca22e7de5a47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.375857347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d1745f4-2136-4049-88e6-377d872c8677 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.375986327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d1745f4-2136-4049-88e6-377d872c8677 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.377163159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=221ac330-7929-4f75-bed4-4bb40d2e3362 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.377564788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467060377543578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=221ac330-7929-4f75-bed4-4bb40d2e3362 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.378162610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e02e6f07-ec9e-4fdb-af2f-50d8c2e3944d name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.378237261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e02e6f07-ec9e-4fdb-af2f-50d8c2e3944d name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:51:00 multinode-276573 crio[2883]: time="2024-08-12 12:51:00.378594500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e02e6f07-ec9e-4fdb-af2f-50d8c2e3944d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6be38c95e851       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6ea92517fa40e       busybox-fc5497c4f-9sww5
	f3867db2c4181       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   396e4d8906205       kindnet-xmzhc
	e84014b3790e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   9f624d8bd2d16       coredns-7db6d8ff4d-x69zs
	4793b358f0325       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   4fc8add5d7bc4       storage-provisioner
	7c4267fc77c67       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   890a7895a53a4       kube-proxy-bhzlc
	a669daee81213       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   0bee4ca84f606       kube-scheduler-multinode-276573
	1a964c2e9317b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   695095a29617f       kube-controller-manager-multinode-276573
	33defdcc7b94e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   b5306eef235ef       etcd-multinode-276573
	1bb5565b5f8ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   0afcb0871f794       kube-apiserver-multinode-276573
	7ed20778843a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   c878191fe3850       busybox-fc5497c4f-9sww5
	4bdc093262451       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   2fa953bfaab3e       storage-provisioner
	aaf10a04808d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   5d040b3b690d8       coredns-7db6d8ff4d-x69zs
	fdc6683739c7f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago        Exited              kindnet-cni               0                   26fe210b6c97f       kindnet-xmzhc
	129aad74969bd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   95dda882abfaa       kube-proxy-bhzlc
	af96c3a99e025       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   c14c34d249185       kube-apiserver-multinode-276573
	419ac7b21b8f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   45b0266e2265f       kube-scheduler-multinode-276573
	e4af25a66f030       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   a1c85aa476c4e       kube-controller-manager-multinode-276573
	877dafd292234       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   4b5e66867dfff       etcd-multinode-276573
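	(Note: the table above is the container listing captured from the node's CRI-O runtime. As a rough sketch, assuming the profile name "multinode-276573" used in this run, an equivalent listing could be pulled manually with:
	  minikube ssh -p multinode-276573 -- sudo crictl ps -a
	This command is an illustration, not part of the captured output.)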
	
	
	==> coredns [aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb] <==
	[INFO] 10.244.1.2:48282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515075s
	[INFO] 10.244.1.2:38786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153234s
	[INFO] 10.244.1.2:41294 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094185s
	[INFO] 10.244.1.2:50512 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001439884s
	[INFO] 10.244.1.2:47980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148447s
	[INFO] 10.244.1.2:46916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010897s
	[INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101687s
	[INFO] 10.244.0.3:46326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106386s
	[INFO] 10.244.0.3:39133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058126s
	[INFO] 10.244.0.3:43809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090478s
	[INFO] 10.244.0.3:43710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046522s
	[INFO] 10.244.1.2:37133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00025321s
	[INFO] 10.244.1.2:44121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008955s
	[INFO] 10.244.1.2:44473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063934s
	[INFO] 10.244.1.2:44808 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006562s
	[INFO] 10.244.0.3:39778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111851s
	[INFO] 10.244.0.3:48316 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175516s
	[INFO] 10.244.0.3:44888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109089s
	[INFO] 10.244.0.3:45339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071058s
	[INFO] 10.244.1.2:60909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239096s
	[INFO] 10.244.1.2:47228 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093107s
	[INFO] 10.244.1.2:46141 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092147s
	[INFO] 10.244.1.2:34310 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101304s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51041 - 24476 "HINFO IN 7729721158021257501.2693719872358529416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015310431s
	
	
	==> describe nodes <==
	Name:               multinode-276573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-276573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=multinode-276573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_42_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:42:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-276573
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    multinode-276573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac32fc823814aeba709a3e679b19cf4
	  System UUID:                4ac32fc8-2381-4aeb-a709-a3e679b19cf4
	  Boot ID:                    4e7fe0b1-4961-44d9-a7f5-a38dfc27ced5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sww5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 coredns-7db6d8ff4d-x69zs                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-276573                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-xmzhc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-276573             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-multinode-276573    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-bhzlc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-276573             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m22s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m37s                kubelet          Node multinode-276573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s                kubelet          Node multinode-276573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s                kubelet          Node multinode-276573 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m37s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m25s                node-controller  Node multinode-276573 event: Registered Node multinode-276573 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node multinode-276573 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node multinode-276573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node multinode-276573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node multinode-276573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                  node-controller  Node multinode-276573 event: Registered Node multinode-276573 in Controller
	
	
	Name:               multinode-276573-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-276573-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=multinode-276573
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_49_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:49:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-276573-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:50:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:49:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:49:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:49:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-276573-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bbfcd02c8104f5a9a0265512559dca8
	  System UUID:                7bbfcd02-c810-4f5a-9a02-65512559dca8
	  Boot ID:                    be877137-ffd4-4a49-9c38-1ae0ada80d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwms8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-z8nqg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-proxy-vvt5d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m30s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-276573-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-276573-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-276573-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-276573-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-276573-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-276573-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-276573-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-276573-m02 status is now: NodeReady
	
	
	Name:               multinode-276573-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-276573-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=multinode-276573
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_50_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:50:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-276573-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:50:57 +0000   Mon, 12 Aug 2024 12:50:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:50:57 +0000   Mon, 12 Aug 2024 12:50:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:50:57 +0000   Mon, 12 Aug 2024 12:50:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:50:57 +0000   Mon, 12 Aug 2024 12:50:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    multinode-276573-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b77bc90174e64dd6b1c28248cdab4dd1
	  System UUID:                b77bc901-74e6-4dd6-b1c2-8248cdab4dd1
	  Boot ID:                    1085acf7-e35c-47c3-9f25-0998ea72f124
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2pgzl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-jpdwd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m37s)  kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m37s)  kubelet          Node multinode-276573-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m37s)  kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet          Node multinode-276573-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-276573-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-276573-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-276573-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-276573-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-276573-m03 event: Registered Node multinode-276573-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-276573-m03 status is now: NodeReady
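	(Note: the three node descriptions above are the standard Kubernetes node summaries. A comparable view can be produced against the same cluster, assuming its kubeconfig is the active context, with:
	  kubectl describe nodes
	This invocation is a sketch for reference and was not part of the captured log.)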
	
	
	==> dmesg <==
	[  +0.068341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068718] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.163659] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.143142] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.262224] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.283669] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.713843] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.068052] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.006774] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.092483] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.701978] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.978465] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.180619] kauditd_printk_skb: 56 callbacks suppressed
	[Aug12 12:43] kauditd_printk_skb: 14 callbacks suppressed
	[Aug12 12:49] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.151047] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.176033] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.141807] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.298217] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +0.784673] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +2.063646] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +4.683224] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.058363] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.357162] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[ +20.509706] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248] <==
	{"level":"info","ts":"2024-08-12T12:49:17.154169Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T12:49:17.154434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 switched to configuration voters=(17951008399345981744)"}
	{"level":"info","ts":"2024-08-12T12:49:17.154505Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","added-peer-id":"f91ecb07db121930","added-peer-peer-urls":["https://192.168.39.187:2380"]}
	{"level":"info","ts":"2024-08-12T12:49:17.15464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:49:17.15468Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:49:17.165424Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T12:49:17.165699Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f91ecb07db121930","initial-advertise-peer-urls":["https://192.168.39.187:2380"],"listen-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.187:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T12:49:17.165792Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T12:49:17.167293Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:49:17.176937Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:49:18.685656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 received MsgPreVoteResp from f91ecb07db121930 at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.685933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 received MsgVoteResp from f91ecb07db121930 at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.686032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became leader at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.686059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f91ecb07db121930 elected leader f91ecb07db121930 at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.692555Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f91ecb07db121930","local-member-attributes":"{Name:multinode-276573 ClientURLs:[https://192.168.39.187:2379]}","request-path":"/0/members/f91ecb07db121930/attributes","cluster-id":"c7f008ff80693278","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T12:49:18.692566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T12:49:18.692805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T12:49:18.692849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T12:49:18.692612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T12:49:18.694834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.187:2379"}
	{"level":"info","ts":"2024-08-12T12:49:18.695567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-12T12:50:06.864105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.519199ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382394613610 > lease_revoke:<id:19309146a2c11eb6>","response":"size:29"}
	
	
	==> etcd [877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63] <==
	{"level":"info","ts":"2024-08-12T12:42:18.808746Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:42:18.808859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:42:18.808907Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-08-12T12:43:25.146567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.95453ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382287368347 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:193091469c5cb09a>","response":"size:42"}
	{"level":"info","ts":"2024-08-12T12:43:25.146785Z","caller":"traceutil/trace.go:171","msg":"trace[611996480] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:469; }","duration":"113.04454ms","start":"2024-08-12T12:43:25.033726Z","end":"2024-08-12T12:43:25.146771Z","steps":["trace[611996480] 'read index received'  (duration: 8.723607ms)","trace[611996480] 'applied index is now lower than readState.Index'  (duration: 104.320358ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:43:25.147298Z","caller":"traceutil/trace.go:171","msg":"trace[1409015642] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"172.545048ms","start":"2024-08-12T12:43:24.974738Z","end":"2024-08-12T12:43:25.147283Z","steps":["trace[1409015642] 'process raft request'  (duration: 172.000958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:25.147589Z","caller":"traceutil/trace.go:171","msg":"trace[1403735309] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"238.141131ms","start":"2024-08-12T12:43:24.909438Z","end":"2024-08-12T12:43:25.14758Z","steps":["trace[1403735309] 'process raft request'  (duration: 237.22749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:43:25.148033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.216252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-276573-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-08-12T12:43:25.14808Z","caller":"traceutil/trace.go:171","msg":"trace[791982244] range","detail":"{range_begin:/registry/minions/multinode-276573-m02; range_end:; response_count:1; response_revision:449; }","duration":"114.361358ms","start":"2024-08-12T12:43:25.03371Z","end":"2024-08-12T12:43:25.148071Z","steps":["trace[791982244] 'agreement among raft nodes before linearized reading'  (duration: 114.202949ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:26.071237Z","caller":"traceutil/trace.go:171","msg":"trace[2128249433] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"181.898761ms","start":"2024-08-12T12:43:25.889288Z","end":"2024-08-12T12:43:26.071186Z","steps":["trace[2128249433] 'process raft request'  (duration: 181.714174ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:29.618824Z","caller":"traceutil/trace.go:171","msg":"trace[215396676] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"114.356141ms","start":"2024-08-12T12:43:29.504447Z","end":"2024-08-12T12:43:29.618803Z","steps":["trace[215396676] 'process raft request'  (duration: 114.237454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:44:24.241875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.161215ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382287368811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:579 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-12T12:44:24.242712Z","caller":"traceutil/trace.go:171","msg":"trace[418331014] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"224.355485ms","start":"2024-08-12T12:44:24.018337Z","end":"2024-08-12T12:44:24.242693Z","steps":["trace[418331014] 'process raft request'  (duration: 224.141983ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:44:24.242915Z","caller":"traceutil/trace.go:171","msg":"trace[1260288387] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"171.727666ms","start":"2024-08-12T12:44:24.071171Z","end":"2024-08-12T12:44:24.242898Z","steps":["trace[1260288387] 'process raft request'  (duration: 171.360274ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:44:24.242917Z","caller":"traceutil/trace.go:171","msg":"trace[587866683] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"235.676604ms","start":"2024-08-12T12:44:24.007228Z","end":"2024-08-12T12:44:24.242905Z","steps":["trace[587866683] 'process raft request'  (duration: 121.846199ms)","trace[587866683] 'compare'  (duration: 112.079893ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:47:40.880303Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T12:47:40.880417Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-276573","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	{"level":"warn","ts":"2024-08-12T12:47:40.880514Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.882368Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.957119Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.957273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T12:47:40.957395Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f91ecb07db121930","current-leader-member-id":"f91ecb07db121930"}
	{"level":"info","ts":"2024-08-12T12:47:40.960089Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:47:40.960246Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:47:40.960283Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-276573","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	
	
	==> kernel <==
	 12:51:00 up 9 min,  0 users,  load average: 0.42, 0.23, 0.12
	Linux multinode-276573 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85] <==
	I0812 12:50:11.892404       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:50:21.891427       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:50:21.891520       1 main.go:299] handling current node
	I0812 12:50:21.891548       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:50:21.891565       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:50:21.891719       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:50:21.891743       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:50:31.892430       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:50:31.892572       1 main.go:299] handling current node
	I0812 12:50:31.892609       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:50:31.892629       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:50:31.892827       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:50:31.892885       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:50:41.893596       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:50:41.893643       1 main.go:299] handling current node
	I0812 12:50:41.893659       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:50:41.893665       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:50:41.893796       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:50:41.893828       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.2.0/24] 
	I0812 12:50:51.894018       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:50:51.894076       1 main.go:299] handling current node
	I0812 12:50:51.894090       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:50:51.894117       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:50:51.894251       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:50:51.894280       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd] <==
	I0812 12:46:52.087318       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:02.093199       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:02.093256       1 main.go:299] handling current node
	I0812 12:47:02.093273       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:02.093279       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:02.093445       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:02.093470       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:12.092615       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:12.092723       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:12.092877       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:12.092904       1 main.go:299] handling current node
	I0812 12:47:12.092926       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:12.092941       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:22.094062       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:22.094110       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:22.094246       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:22.094271       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:22.094329       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:22.094351       1 main.go:299] handling current node
	I0812 12:47:32.092884       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:32.093254       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:32.093645       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:32.093685       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:32.093891       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:32.093927       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4] <==
	I0812 12:49:19.921391       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0812 12:49:20.003256       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 12:49:20.009503       1 aggregator.go:165] initial CRD sync complete...
	I0812 12:49:20.009564       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 12:49:20.009571       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 12:49:20.046677       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 12:49:20.048391       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 12:49:20.048473       1 policy_source.go:224] refreshing policies
	I0812 12:49:20.085846       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 12:49:20.087524       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 12:49:20.100670       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 12:49:20.101490       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 12:49:20.107053       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 12:49:20.107117       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 12:49:20.109659       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 12:49:20.117791       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 12:49:20.118202       1 cache.go:39] Caches are synced for autoregister controller
	I0812 12:49:20.903345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 12:49:22.064204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 12:49:22.189151       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 12:49:22.211940       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 12:49:22.282860       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 12:49:22.291226       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 12:49:32.366521       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:49:32.492328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c] <==
	I0812 12:47:40.892310       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0812 12:47:40.892351       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0812 12:47:40.892391       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0812 12:47:40.892441       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0812 12:47:40.892490       1 controller.go:129] Ending legacy_token_tracking_controller
	I0812 12:47:40.892520       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0812 12:47:40.892559       1 establishing_controller.go:87] Shutting down EstablishingController
	I0812 12:47:40.892611       1 naming_controller.go:302] Shutting down NamingConditionController
	I0812 12:47:40.892644       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0812 12:47:40.892679       1 controller.go:167] Shutting down OpenAPI controller
	I0812 12:47:40.892716       1 available_controller.go:439] Shutting down AvailableConditionController
	I0812 12:47:40.892747       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0812 12:47:40.892780       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0812 12:47:40.892806       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0812 12:47:40.892841       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0812 12:47:40.892982       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0812 12:47:40.893378       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:47:40.894137       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:47:40.897272       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0812 12:47:40.897583       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0812 12:47:40.897679       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0812 12:47:40.897789       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0812 12:47:40.897862       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0812 12:47:40.897890       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:47:40.903306       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	
	==> kube-controller-manager [1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554] <==
	I0812 12:49:33.084218       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0812 12:49:54.340033       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.783371ms"
	I0812 12:49:54.340115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.33µs"
	I0812 12:49:54.371819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.037232ms"
	I0812 12:49:54.410688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.811224ms"
	I0812 12:49:54.410797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.454µs"
	I0812 12:49:58.580183       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m02\" does not exist"
	I0812 12:49:58.593507       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m02" podCIDRs=["10.244.1.0/24"]
	I0812 12:50:00.468286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.574µs"
	I0812 12:50:00.483150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.044µs"
	I0812 12:50:00.496143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.417µs"
	I0812 12:50:00.540735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.72µs"
	I0812 12:50:00.548713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.132µs"
	I0812 12:50:00.551132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.652µs"
	I0812 12:50:02.906157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.392µs"
	I0812 12:50:18.269534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:18.294237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.072µs"
	I0812 12:50:18.307713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.967µs"
	I0812 12:50:21.878710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.482595ms"
	I0812 12:50:21.879219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.48µs"
	I0812 12:50:36.588285       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:37.754308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:50:37.757536       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:37.766302       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:50:57.405878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	
	
	==> kube-controller-manager [e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9] <==
	I0812 12:43:25.181697       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m02" podCIDRs=["10.244.1.0/24"]
	I0812 12:43:25.592088       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-276573-m02"
	I0812 12:43:45.300351       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:43:47.794785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.434645ms"
	I0812 12:43:47.810919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.021909ms"
	I0812 12:43:47.811637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.554µs"
	I0812 12:43:47.813361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.616µs"
	I0812 12:43:47.819499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.195µs"
	I0812 12:43:52.046423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.930069ms"
	I0812 12:43:52.047357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.938µs"
	I0812 12:43:52.515246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.127666ms"
	I0812 12:43:52.515907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.208µs"
	I0812 12:44:24.247313       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:44:24.250124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:44:24.261290       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:44:25.612377       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-276573-m03"
	I0812 12:44:45.241191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:14.017298       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:15.147392       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:15.147590       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:45:15.164677       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.3.0/24"]
	I0812 12:45:35.279848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:46:15.668189       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:46:20.762902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.711819ms"
	I0812 12:46:20.763031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.94µs"
	
	
	==> kube-proxy [129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c] <==
	I0812 12:42:37.626113       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:42:37.662339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0812 12:42:37.707404       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:42:37.707466       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:42:37.707483       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:42:37.711115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:42:37.711603       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:42:37.711722       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:42:37.713712       1 config.go:192] "Starting service config controller"
	I0812 12:42:37.713757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:42:37.713784       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:42:37.713805       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:42:37.715055       1 config.go:319] "Starting node config controller"
	I0812 12:42:37.715083       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:42:37.814748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:42:37.814796       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:42:37.815172       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1] <==
	I0812 12:49:20.878273       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:49:20.902769       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0812 12:49:20.983619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:49:20.986374       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:49:20.986436       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:49:20.993764       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:49:20.994201       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:49:20.994449       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:49:20.995759       1 config.go:192] "Starting service config controller"
	I0812 12:49:20.995910       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:49:20.996063       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:49:20.996144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:49:20.996705       1 config.go:319] "Starting node config controller"
	I0812 12:49:20.996768       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:49:21.096539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:49:21.096634       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:49:21.097097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510] <==
	W0812 12:42:21.115214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 12:42:21.115258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 12:42:21.174423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 12:42:21.174470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:42:21.214892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 12:42:21.214941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 12:42:21.297336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 12:42:21.297487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 12:42:21.328857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 12:42:21.328902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 12:42:21.333826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 12:42:21.333908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 12:42:21.436904       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:42:21.437072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:42:21.526348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 12:42:21.526474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 12:42:21.557640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:42:21.557744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:42:21.773292       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 12:42:21.773342       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 12:42:24.321082       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 12:47:40.880535       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0812 12:47:40.880672       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:47:40.881197       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0812 12:47:40.881801       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc] <==
	I0812 12:49:17.790751       1 serving.go:380] Generated self-signed cert in-memory
	W0812 12:49:19.951780       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 12:49:19.951907       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 12:49:19.951937       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 12:49:19.952019       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 12:49:19.992858       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 12:49:19.995120       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:49:19.996921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 12:49:19.998422       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 12:49:19.999012       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 12:49:19.998605       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 12:49:20.099131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 12:49:16 multinode-276573 kubelet[3099]: W0812 12:49:16.905132    3099 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.187:8443: connect: connection refused
	Aug 12 12:49:16 multinode-276573 kubelet[3099]: E0812 12:49:16.905220    3099 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.187:8443: connect: connection refused
	Aug 12 12:49:17 multinode-276573 kubelet[3099]: I0812 12:49:17.533679    3099 kubelet_node_status.go:73] "Attempting to register node" node="multinode-276573"
	Aug 12 12:49:19 multinode-276573 kubelet[3099]: I0812 12:49:19.999781    3099 apiserver.go:52] "Watching apiserver"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.039628    3099 topology_manager.go:215] "Topology Admit Handler" podUID="336c890e-36b0-41c8-adcb-c8ff7c9a84f6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x69zs"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.039785    3099 topology_manager.go:215] "Topology Admit Handler" podUID="214cf688-5730-4864-9796-d8f2f321cda3" podNamespace="kube-system" podName="kindnet-xmzhc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.039851    3099 topology_manager.go:215] "Topology Admit Handler" podUID="0ccc5f5f-1f74-4813-a584-05f8c760b5e5" podNamespace="kube-system" podName="kube-proxy-bhzlc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.039893    3099 topology_manager.go:215] "Topology Admit Handler" podUID="80784c3e-31fe-4aad-8f01-fd00ccdc0333" podNamespace="kube-system" podName="storage-provisioner"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.039999    3099 topology_manager.go:215] "Topology Admit Handler" podUID="1fd62a65-9720-4836-992e-94d373a6cd68" podNamespace="default" podName="busybox-fc5497c4f-9sww5"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.114552    3099 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139074    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ccc5f5f-1f74-4813-a584-05f8c760b5e5-lib-modules\") pod \"kube-proxy-bhzlc\" (UID: \"0ccc5f5f-1f74-4813-a584-05f8c760b5e5\") " pod="kube-system/kube-proxy-bhzlc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139268    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ccc5f5f-1f74-4813-a584-05f8c760b5e5-xtables-lock\") pod \"kube-proxy-bhzlc\" (UID: \"0ccc5f5f-1f74-4813-a584-05f8c760b5e5\") " pod="kube-system/kube-proxy-bhzlc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139394    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80784c3e-31fe-4aad-8f01-fd00ccdc0333-tmp\") pod \"storage-provisioner\" (UID: \"80784c3e-31fe-4aad-8f01-fd00ccdc0333\") " pod="kube-system/storage-provisioner"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139495    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/214cf688-5730-4864-9796-d8f2f321cda3-lib-modules\") pod \"kindnet-xmzhc\" (UID: \"214cf688-5730-4864-9796-d8f2f321cda3\") " pod="kube-system/kindnet-xmzhc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139604    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/214cf688-5730-4864-9796-d8f2f321cda3-cni-cfg\") pod \"kindnet-xmzhc\" (UID: \"214cf688-5730-4864-9796-d8f2f321cda3\") " pod="kube-system/kindnet-xmzhc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139650    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/214cf688-5730-4864-9796-d8f2f321cda3-xtables-lock\") pod \"kindnet-xmzhc\" (UID: \"214cf688-5730-4864-9796-d8f2f321cda3\") " pod="kube-system/kindnet-xmzhc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.143847    3099 kubelet_node_status.go:112] "Node was previously registered" node="multinode-276573"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.143924    3099 kubelet_node_status.go:76] "Successfully registered node" node="multinode-276573"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.145552    3099 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.146587    3099 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 12 12:50:16 multinode-276573 kubelet[3099]: E0812 12:50:16.091538    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 12:50:59.932874  505249 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19411-463103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-276573 -n multinode-276573
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-276573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.74s)
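
The stderr block above shows the post-mortem log collector failing with "bufio.Scanner: token too long" while reading lastStart.txt. In Go, bufio.Scanner caps a single token at 64 KiB by default and returns bufio.ErrTooLong when one line exceeds that. The sketch below only illustrates that failure mode and its usual workaround (raising the scanner buffer); it is not minikube's actual logs code, and the file path is hypothetical.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; the report above shows minikube reading
		// .minikube/logs/lastStart.txt and hitting the default 64 KiB token limit.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the maximum token size so a very long line no longer
		// triggers bufio.ErrTooLong ("token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
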

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-276573 stop: exit status 82 (2m0.489191658s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-276573-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-276573 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-276573 status: exit status 3 (18.776881164s)

                                                
                                                
-- stdout --
	multinode-276573
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276573-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 12:53:23.529504  505912 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0812 12:53:23.529539  505912 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-276573 status" : exit status 3
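
The status failure above reduces to an unreachable SSH port: after the stop timed out, 192.168.39.87:22 no longer answers, so the TCP dial reports "no route to host" and the worker is shown as host: Error / kubelet: Nonexistent. The snippet below is a minimal reachability check of the same kind, using the address from the stderr above purely for illustration; it is not the code minikube's status command runs.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the stderr above; purely illustrative.
		addr := "192.168.39.87:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// With the VM half-stopped, the dial surfaces as
			// "connect: no route to host", matching the status error above.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}
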
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-276573 -n multinode-276573
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-276573 logs -n 25: (1.496337635s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573:/home/docker/cp-test_multinode-276573-m02_multinode-276573.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573 sudo cat                                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m02_multinode-276573.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03:/home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573-m03 sudo cat                                   | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp testdata/cp-test.txt                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573:/home/docker/cp-test_multinode-276573-m03_multinode-276573.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573 sudo cat                                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02:/home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573-m02 sudo cat                                   | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-276573 node stop m03                                                          | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	| node    | multinode-276573 node start                                                             | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| stop    | -p multinode-276573                                                                     | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| start   | -p multinode-276573                                                                     | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:47 UTC | 12 Aug 24 12:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:50 UTC |                     |
	| node    | multinode-276573 node delete                                                            | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:51 UTC | 12 Aug 24 12:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-276573 stop                                                                   | multinode-276573 | jenkins | v1.33.1 | 12 Aug 24 12:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 12:47:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 12:47:40.048744  504120 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:47:40.049033  504120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:47:40.049043  504120 out.go:304] Setting ErrFile to fd 2...
	I0812 12:47:40.049049  504120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:47:40.049309  504120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:47:40.049863  504120 out.go:298] Setting JSON to false
	I0812 12:47:40.050912  504120 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16191,"bootTime":1723450669,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:47:40.050979  504120 start.go:139] virtualization: kvm guest
	I0812 12:47:40.053375  504120 out.go:177] * [multinode-276573] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:47:40.054937  504120 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:47:40.054999  504120 notify.go:220] Checking for updates...
	I0812 12:47:40.058058  504120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:47:40.059638  504120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:47:40.061023  504120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:47:40.062224  504120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:47:40.063473  504120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:47:40.065310  504120 config.go:182] Loaded profile config "multinode-276573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:47:40.065414  504120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:47:40.065796  504120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:47:40.065851  504120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:47:40.081241  504120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0812 12:47:40.081747  504120 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:47:40.082322  504120 main.go:141] libmachine: Using API Version  1
	I0812 12:47:40.082343  504120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:47:40.082767  504120 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:47:40.082968  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.120461  504120 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 12:47:40.121946  504120 start.go:297] selected driver: kvm2
	I0812 12:47:40.121979  504120 start.go:901] validating driver "kvm2" against &{Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:47:40.122212  504120 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:47:40.122678  504120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:47:40.122800  504120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 12:47:40.138490  504120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 12:47:40.139239  504120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 12:47:40.139315  504120 cni.go:84] Creating CNI manager for ""
	I0812 12:47:40.139331  504120 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 12:47:40.139402  504120 start.go:340] cluster config:
	{Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:47:40.139565  504120 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 12:47:40.141929  504120 out.go:177] * Starting "multinode-276573" primary control-plane node in "multinode-276573" cluster
	I0812 12:47:40.143695  504120 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:47:40.143740  504120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 12:47:40.143753  504120 cache.go:56] Caching tarball of preloaded images
	I0812 12:47:40.143836  504120 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 12:47:40.143848  504120 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 12:47:40.143979  504120 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/config.json ...
	I0812 12:47:40.144194  504120 start.go:360] acquireMachinesLock for multinode-276573: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 12:47:40.144252  504120 start.go:364] duration metric: took 37.135µs to acquireMachinesLock for "multinode-276573"
	I0812 12:47:40.144273  504120 start.go:96] Skipping create...Using existing machine configuration
	I0812 12:47:40.144282  504120 fix.go:54] fixHost starting: 
	I0812 12:47:40.144561  504120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:47:40.144627  504120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:47:40.159477  504120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0812 12:47:40.159964  504120 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:47:40.160443  504120 main.go:141] libmachine: Using API Version  1
	I0812 12:47:40.160464  504120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:47:40.160858  504120 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:47:40.161036  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.161197  504120 main.go:141] libmachine: (multinode-276573) Calling .GetState
	I0812 12:47:40.162968  504120 fix.go:112] recreateIfNeeded on multinode-276573: state=Running err=<nil>
	W0812 12:47:40.162991  504120 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 12:47:40.165379  504120 out.go:177] * Updating the running kvm2 "multinode-276573" VM ...
	I0812 12:47:40.166961  504120 machine.go:94] provisionDockerMachine start ...
	I0812 12:47:40.166987  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:47:40.167217  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.169733  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.170267  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.170310  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.170437  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.170651  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.170833  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.170962  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.171144  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.171421  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.171438  504120 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 12:47:40.287045  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-276573
	
	I0812 12:47:40.287078  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.287363  504120 buildroot.go:166] provisioning hostname "multinode-276573"
	I0812 12:47:40.287391  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.287631  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.290708  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.291099  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.291139  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.291349  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.291596  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.291766  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.291895  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.292093  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.292271  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.292286  504120 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-276573 && echo "multinode-276573" | sudo tee /etc/hostname
	I0812 12:47:40.414263  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-276573
	
	I0812 12:47:40.414307  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.417551  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.417990  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.418027  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.418251  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.418480  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.418792  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.419002  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.419213  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.419407  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.419430  504120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-276573' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-276573/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-276573' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 12:47:40.526493  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 12:47:40.526524  504120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 12:47:40.526545  504120 buildroot.go:174] setting up certificates
	I0812 12:47:40.526554  504120 provision.go:84] configureAuth start
	I0812 12:47:40.526563  504120 main.go:141] libmachine: (multinode-276573) Calling .GetMachineName
	I0812 12:47:40.526968  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:47:40.529836  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.530233  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.530255  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.530419  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.532806  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.533244  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.533276  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.533436  504120 provision.go:143] copyHostCerts
	I0812 12:47:40.533484  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:47:40.533539  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 12:47:40.533552  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 12:47:40.533636  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 12:47:40.533793  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:47:40.533824  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 12:47:40.533832  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 12:47:40.533881  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 12:47:40.533968  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:47:40.533991  504120 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 12:47:40.533998  504120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 12:47:40.534038  504120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 12:47:40.534121  504120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.multinode-276573 san=[127.0.0.1 192.168.39.187 localhost minikube multinode-276573]
	I0812 12:47:40.585907  504120 provision.go:177] copyRemoteCerts
	I0812 12:47:40.585987  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 12:47:40.586021  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.588986  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.589374  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.589395  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.589582  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.589820  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.589972  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.590099  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:47:40.673045  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0812 12:47:40.673138  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 12:47:40.699555  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0812 12:47:40.699644  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0812 12:47:40.725109  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0812 12:47:40.725202  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 12:47:40.750514  504120 provision.go:87] duration metric: took 223.943514ms to configureAuth
	I0812 12:47:40.750550  504120 buildroot.go:189] setting minikube options for container-runtime
	I0812 12:47:40.750783  504120 config.go:182] Loaded profile config "multinode-276573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:47:40.750862  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:47:40.753798  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.754241  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:47:40.754267  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:47:40.754495  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:47:40.754695  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.754887  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:47:40.755027  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:47:40.755166  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:47:40.755343  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:47:40.755357  504120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 12:49:11.530675  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 12:49:11.530730  504120 machine.go:97] duration metric: took 1m31.363750407s to provisionDockerMachine
	I0812 12:49:11.530748  504120 start.go:293] postStartSetup for "multinode-276573" (driver="kvm2")
	I0812 12:49:11.530761  504120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 12:49:11.530833  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.531215  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 12:49:11.531247  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.534668  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.535140  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.535172  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.535364  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.535577  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.535744  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.535916  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.620902  504120 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 12:49:11.625459  504120 command_runner.go:130] > NAME=Buildroot
	I0812 12:49:11.625483  504120 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0812 12:49:11.625488  504120 command_runner.go:130] > ID=buildroot
	I0812 12:49:11.625495  504120 command_runner.go:130] > VERSION_ID=2023.02.9
	I0812 12:49:11.625502  504120 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0812 12:49:11.625597  504120 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 12:49:11.625616  504120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 12:49:11.625691  504120 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 12:49:11.625764  504120 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 12:49:11.625775  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /etc/ssl/certs/4703752.pem
	I0812 12:49:11.625870  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 12:49:11.636217  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:49:11.660891  504120 start.go:296] duration metric: took 130.125892ms for postStartSetup
	I0812 12:49:11.660939  504120 fix.go:56] duration metric: took 1m31.516658672s for fixHost
	I0812 12:49:11.660974  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.663806  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.664396  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.664420  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.664639  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.664868  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.665007  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.665219  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.665430  504120 main.go:141] libmachine: Using SSH client type: native
	I0812 12:49:11.665607  504120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0812 12:49:11.665617  504120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 12:49:11.770234  504120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723466951.742527319
	
	I0812 12:49:11.770270  504120 fix.go:216] guest clock: 1723466951.742527319
	I0812 12:49:11.770282  504120 fix.go:229] Guest: 2024-08-12 12:49:11.742527319 +0000 UTC Remote: 2024-08-12 12:49:11.660949606 +0000 UTC m=+91.650205786 (delta=81.577713ms)
	I0812 12:49:11.770328  504120 fix.go:200] guest clock delta is within tolerance: 81.577713ms
	I0812 12:49:11.770338  504120 start.go:83] releasing machines lock for "multinode-276573", held for 1m31.626073217s
	I0812 12:49:11.770368  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.770684  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:49:11.773602  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.774005  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.774032  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.774248  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.774843  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.775057  504120 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:49:11.775165  504120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 12:49:11.775206  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.775312  504120 ssh_runner.go:195] Run: cat /version.json
	I0812 12:49:11.775342  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:49:11.778235  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778262  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778627  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.778656  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778683  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:11.778704  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:11.778756  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.778985  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.779026  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:49:11.779189  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:49:11.779191  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.779319  504120 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:49:11.779469  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.779491  504120 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:49:11.854108  504120 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0812 12:49:11.854495  504120 ssh_runner.go:195] Run: systemctl --version
	I0812 12:49:11.877824  504120 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0812 12:49:11.878466  504120 command_runner.go:130] > systemd 252 (252)
	I0812 12:49:11.878509  504120 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0812 12:49:11.878590  504120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 12:49:12.041662  504120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0812 12:49:12.050138  504120 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0812 12:49:12.050443  504120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 12:49:12.050523  504120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 12:49:12.060162  504120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0812 12:49:12.060188  504120 start.go:495] detecting cgroup driver to use...
	I0812 12:49:12.060252  504120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 12:49:12.077562  504120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 12:49:12.091967  504120 docker.go:217] disabling cri-docker service (if available) ...
	I0812 12:49:12.092045  504120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 12:49:12.106024  504120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 12:49:12.120311  504120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 12:49:12.268894  504120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 12:49:12.419018  504120 docker.go:233] disabling docker service ...
	I0812 12:49:12.419103  504120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 12:49:12.437657  504120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 12:49:12.451691  504120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 12:49:12.599201  504120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 12:49:12.739926  504120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 12:49:12.756616  504120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 12:49:12.777552  504120 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0812 12:49:12.778110  504120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0812 12:49:12.778187  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.790360  504120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 12:49:12.790446  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.801975  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.813550  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.824850  504120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 12:49:12.836662  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.848376  504120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.861217  504120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 12:49:12.872749  504120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 12:49:12.882337  504120 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0812 12:49:12.882522  504120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 12:49:12.893119  504120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:49:13.031098  504120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 12:49:13.306209  504120 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 12:49:13.306303  504120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 12:49:13.311596  504120 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0812 12:49:13.311619  504120 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0812 12:49:13.311627  504120 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0812 12:49:13.311635  504120 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 12:49:13.311639  504120 command_runner.go:130] > Access: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311646  504120 command_runner.go:130] > Modify: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311651  504120 command_runner.go:130] > Change: 2024-08-12 12:49:13.149342033 +0000
	I0812 12:49:13.311654  504120 command_runner.go:130] >  Birth: -
	I0812 12:49:13.311671  504120 start.go:563] Will wait 60s for crictl version
	I0812 12:49:13.311726  504120 ssh_runner.go:195] Run: which crictl
	I0812 12:49:13.316869  504120 command_runner.go:130] > /usr/bin/crictl
	I0812 12:49:13.316938  504120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 12:49:13.354320  504120 command_runner.go:130] > Version:  0.1.0
	I0812 12:49:13.354348  504120 command_runner.go:130] > RuntimeName:  cri-o
	I0812 12:49:13.354353  504120 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0812 12:49:13.354379  504120 command_runner.go:130] > RuntimeApiVersion:  v1
	I0812 12:49:13.355814  504120 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 12:49:13.355905  504120 ssh_runner.go:195] Run: crio --version
	I0812 12:49:13.385250  504120 command_runner.go:130] > crio version 1.29.1
	I0812 12:49:13.385275  504120 command_runner.go:130] > Version:        1.29.1
	I0812 12:49:13.385281  504120 command_runner.go:130] > GitCommit:      unknown
	I0812 12:49:13.385286  504120 command_runner.go:130] > GitCommitDate:  unknown
	I0812 12:49:13.385290  504120 command_runner.go:130] > GitTreeState:   clean
	I0812 12:49:13.385295  504120 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 12:49:13.385299  504120 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 12:49:13.385303  504120 command_runner.go:130] > Compiler:       gc
	I0812 12:49:13.385307  504120 command_runner.go:130] > Platform:       linux/amd64
	I0812 12:49:13.385312  504120 command_runner.go:130] > Linkmode:       dynamic
	I0812 12:49:13.385324  504120 command_runner.go:130] > BuildTags:      
	I0812 12:49:13.385331  504120 command_runner.go:130] >   containers_image_ostree_stub
	I0812 12:49:13.385337  504120 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 12:49:13.385347  504120 command_runner.go:130] >   btrfs_noversion
	I0812 12:49:13.385355  504120 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 12:49:13.385364  504120 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 12:49:13.385368  504120 command_runner.go:130] >   seccomp
	I0812 12:49:13.385373  504120 command_runner.go:130] > LDFlags:          unknown
	I0812 12:49:13.385377  504120 command_runner.go:130] > SeccompEnabled:   true
	I0812 12:49:13.385381  504120 command_runner.go:130] > AppArmorEnabled:  false
	I0812 12:49:13.386617  504120 ssh_runner.go:195] Run: crio --version
	I0812 12:49:13.417304  504120 command_runner.go:130] > crio version 1.29.1
	I0812 12:49:13.417337  504120 command_runner.go:130] > Version:        1.29.1
	I0812 12:49:13.417347  504120 command_runner.go:130] > GitCommit:      unknown
	I0812 12:49:13.417354  504120 command_runner.go:130] > GitCommitDate:  unknown
	I0812 12:49:13.417360  504120 command_runner.go:130] > GitTreeState:   clean
	I0812 12:49:13.417367  504120 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0812 12:49:13.417371  504120 command_runner.go:130] > GoVersion:      go1.21.6
	I0812 12:49:13.417375  504120 command_runner.go:130] > Compiler:       gc
	I0812 12:49:13.417380  504120 command_runner.go:130] > Platform:       linux/amd64
	I0812 12:49:13.417384  504120 command_runner.go:130] > Linkmode:       dynamic
	I0812 12:49:13.417388  504120 command_runner.go:130] > BuildTags:      
	I0812 12:49:13.417392  504120 command_runner.go:130] >   containers_image_ostree_stub
	I0812 12:49:13.417397  504120 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0812 12:49:13.417401  504120 command_runner.go:130] >   btrfs_noversion
	I0812 12:49:13.417406  504120 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0812 12:49:13.417410  504120 command_runner.go:130] >   libdm_no_deferred_remove
	I0812 12:49:13.417413  504120 command_runner.go:130] >   seccomp
	I0812 12:49:13.417418  504120 command_runner.go:130] > LDFlags:          unknown
	I0812 12:49:13.417422  504120 command_runner.go:130] > SeccompEnabled:   true
	I0812 12:49:13.417427  504120 command_runner.go:130] > AppArmorEnabled:  false
	I0812 12:49:13.420561  504120 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0812 12:49:13.422003  504120 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:49:13.425319  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:13.425739  504120 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:49:13.425782  504120 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:49:13.426061  504120 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 12:49:13.431063  504120 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0812 12:49:13.431180  504120 kubeadm.go:883] updating cluster {Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 12:49:13.431335  504120 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 12:49:13.431380  504120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:49:13.484804  504120 command_runner.go:130] > {
	I0812 12:49:13.484829  504120 command_runner.go:130] >   "images": [
	I0812 12:49:13.484833  504120 command_runner.go:130] >     {
	I0812 12:49:13.484840  504120 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 12:49:13.484845  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.484851  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 12:49:13.484854  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484858  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.484870  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 12:49:13.484877  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 12:49:13.484880  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484885  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.484888  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.484892  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.484898  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.484908  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.484912  504120 command_runner.go:130] >     },
	I0812 12:49:13.484916  504120 command_runner.go:130] >     {
	I0812 12:49:13.484921  504120 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 12:49:13.484930  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.484935  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 12:49:13.484941  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484945  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.484951  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 12:49:13.484960  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 12:49:13.484964  504120 command_runner.go:130] >       ],
	I0812 12:49:13.484968  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.484972  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.484981  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.484989  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.484992  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.484996  504120 command_runner.go:130] >     },
	I0812 12:49:13.484999  504120 command_runner.go:130] >     {
	I0812 12:49:13.485004  504120 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 12:49:13.485008  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485013  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 12:49:13.485016  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485020  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485029  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 12:49:13.485038  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 12:49:13.485042  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485047  504120 command_runner.go:130] >       "size": "1363676",
	I0812 12:49:13.485053  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485057  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485063  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485067  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485073  504120 command_runner.go:130] >     },
	I0812 12:49:13.485076  504120 command_runner.go:130] >     {
	I0812 12:49:13.485096  504120 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 12:49:13.485100  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485105  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 12:49:13.485113  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485120  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485127  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 12:49:13.485145  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 12:49:13.485151  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485156  504120 command_runner.go:130] >       "size": "31470524",
	I0812 12:49:13.485161  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485165  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485171  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485175  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485178  504120 command_runner.go:130] >     },
	I0812 12:49:13.485182  504120 command_runner.go:130] >     {
	I0812 12:49:13.485188  504120 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 12:49:13.485195  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485199  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 12:49:13.485203  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485207  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485214  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 12:49:13.485222  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 12:49:13.485226  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485230  504120 command_runner.go:130] >       "size": "61245718",
	I0812 12:49:13.485235  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485239  504120 command_runner.go:130] >       "username": "nonroot",
	I0812 12:49:13.485244  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485248  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485251  504120 command_runner.go:130] >     },
	I0812 12:49:13.485257  504120 command_runner.go:130] >     {
	I0812 12:49:13.485262  504120 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 12:49:13.485268  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485273  504120 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 12:49:13.485279  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485282  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485289  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 12:49:13.485298  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 12:49:13.485301  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485305  504120 command_runner.go:130] >       "size": "150779692",
	I0812 12:49:13.485316  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485322  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485326  504120 command_runner.go:130] >       },
	I0812 12:49:13.485332  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485336  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485342  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485346  504120 command_runner.go:130] >     },
	I0812 12:49:13.485352  504120 command_runner.go:130] >     {
	I0812 12:49:13.485358  504120 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 12:49:13.485364  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485369  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 12:49:13.485375  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485379  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485388  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 12:49:13.485397  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 12:49:13.485400  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485406  504120 command_runner.go:130] >       "size": "117609954",
	I0812 12:49:13.485409  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485415  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485419  504120 command_runner.go:130] >       },
	I0812 12:49:13.485425  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485429  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485435  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485438  504120 command_runner.go:130] >     },
	I0812 12:49:13.485443  504120 command_runner.go:130] >     {
	I0812 12:49:13.485449  504120 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 12:49:13.485455  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485460  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 12:49:13.485466  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485470  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485511  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 12:49:13.485522  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 12:49:13.485525  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485529  504120 command_runner.go:130] >       "size": "112198984",
	I0812 12:49:13.485532  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485542  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485552  504120 command_runner.go:130] >       },
	I0812 12:49:13.485556  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485560  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485563  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485567  504120 command_runner.go:130] >     },
	I0812 12:49:13.485570  504120 command_runner.go:130] >     {
	I0812 12:49:13.485575  504120 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 12:49:13.485586  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485590  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 12:49:13.485593  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485597  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485603  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 12:49:13.485610  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 12:49:13.485613  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485617  504120 command_runner.go:130] >       "size": "85953945",
	I0812 12:49:13.485621  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.485624  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485627  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485631  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485634  504120 command_runner.go:130] >     },
	I0812 12:49:13.485637  504120 command_runner.go:130] >     {
	I0812 12:49:13.485642  504120 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 12:49:13.485646  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485660  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 12:49:13.485663  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485667  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485674  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 12:49:13.485683  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 12:49:13.485687  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485691  504120 command_runner.go:130] >       "size": "63051080",
	I0812 12:49:13.485694  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485698  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.485702  504120 command_runner.go:130] >       },
	I0812 12:49:13.485708  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485712  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485716  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.485724  504120 command_runner.go:130] >     },
	I0812 12:49:13.485730  504120 command_runner.go:130] >     {
	I0812 12:49:13.485736  504120 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 12:49:13.485739  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.485744  504120 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 12:49:13.485747  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485757  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.485766  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 12:49:13.485776  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 12:49:13.485781  504120 command_runner.go:130] >       ],
	I0812 12:49:13.485785  504120 command_runner.go:130] >       "size": "750414",
	I0812 12:49:13.485788  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.485792  504120 command_runner.go:130] >         "value": "65535"
	I0812 12:49:13.485796  504120 command_runner.go:130] >       },
	I0812 12:49:13.485800  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.485803  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.485807  504120 command_runner.go:130] >       "pinned": true
	I0812 12:49:13.485810  504120 command_runner.go:130] >     }
	I0812 12:49:13.485813  504120 command_runner.go:130] >   ]
	I0812 12:49:13.485816  504120 command_runner.go:130] > }
	I0812 12:49:13.486227  504120 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:49:13.486252  504120 crio.go:433] Images already preloaded, skipping extraction
	I0812 12:49:13.486305  504120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 12:49:13.531044  504120 command_runner.go:130] > {
	I0812 12:49:13.531072  504120 command_runner.go:130] >   "images": [
	I0812 12:49:13.531077  504120 command_runner.go:130] >     {
	I0812 12:49:13.531085  504120 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0812 12:49:13.531091  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531105  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0812 12:49:13.531109  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531113  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531124  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0812 12:49:13.531131  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0812 12:49:13.531137  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531144  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.531148  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531153  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531159  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531163  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531166  504120 command_runner.go:130] >     },
	I0812 12:49:13.531169  504120 command_runner.go:130] >     {
	I0812 12:49:13.531175  504120 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0812 12:49:13.531183  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531189  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0812 12:49:13.531193  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531197  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531204  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0812 12:49:13.531213  504120 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0812 12:49:13.531217  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531220  504120 command_runner.go:130] >       "size": "87165492",
	I0812 12:49:13.531224  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531234  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531237  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531242  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531253  504120 command_runner.go:130] >     },
	I0812 12:49:13.531260  504120 command_runner.go:130] >     {
	I0812 12:49:13.531266  504120 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0812 12:49:13.531270  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531276  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0812 12:49:13.531279  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531283  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531292  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0812 12:49:13.531299  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0812 12:49:13.531305  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531308  504120 command_runner.go:130] >       "size": "1363676",
	I0812 12:49:13.531313  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531317  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531321  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531325  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531331  504120 command_runner.go:130] >     },
	I0812 12:49:13.531334  504120 command_runner.go:130] >     {
	I0812 12:49:13.531340  504120 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0812 12:49:13.531345  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531350  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0812 12:49:13.531353  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531357  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531364  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0812 12:49:13.531381  504120 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0812 12:49:13.531387  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531392  504120 command_runner.go:130] >       "size": "31470524",
	I0812 12:49:13.531398  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531402  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531406  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531412  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531416  504120 command_runner.go:130] >     },
	I0812 12:49:13.531420  504120 command_runner.go:130] >     {
	I0812 12:49:13.531426  504120 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0812 12:49:13.531432  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531439  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0812 12:49:13.531447  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531460  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531469  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0812 12:49:13.531476  504120 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0812 12:49:13.531482  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531486  504120 command_runner.go:130] >       "size": "61245718",
	I0812 12:49:13.531495  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531502  504120 command_runner.go:130] >       "username": "nonroot",
	I0812 12:49:13.531506  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531509  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531513  504120 command_runner.go:130] >     },
	I0812 12:49:13.531517  504120 command_runner.go:130] >     {
	I0812 12:49:13.531522  504120 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0812 12:49:13.531528  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531533  504120 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0812 12:49:13.531536  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531540  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531549  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0812 12:49:13.531558  504120 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0812 12:49:13.531562  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531567  504120 command_runner.go:130] >       "size": "150779692",
	I0812 12:49:13.531572  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531576  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531582  504120 command_runner.go:130] >       },
	I0812 12:49:13.531586  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531592  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531596  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531600  504120 command_runner.go:130] >     },
	I0812 12:49:13.531605  504120 command_runner.go:130] >     {
	I0812 12:49:13.531611  504120 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0812 12:49:13.531618  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531624  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0812 12:49:13.531629  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531633  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531662  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0812 12:49:13.531676  504120 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0812 12:49:13.531679  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531694  504120 command_runner.go:130] >       "size": "117609954",
	I0812 12:49:13.531698  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531702  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531705  504120 command_runner.go:130] >       },
	I0812 12:49:13.531708  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531712  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531716  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531719  504120 command_runner.go:130] >     },
	I0812 12:49:13.531722  504120 command_runner.go:130] >     {
	I0812 12:49:13.531727  504120 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0812 12:49:13.531731  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531736  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0812 12:49:13.531740  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531743  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531766  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0812 12:49:13.531775  504120 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0812 12:49:13.531781  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531785  504120 command_runner.go:130] >       "size": "112198984",
	I0812 12:49:13.531791  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531795  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531801  504120 command_runner.go:130] >       },
	I0812 12:49:13.531805  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531811  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531815  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531820  504120 command_runner.go:130] >     },
	I0812 12:49:13.531823  504120 command_runner.go:130] >     {
	I0812 12:49:13.531832  504120 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0812 12:49:13.531838  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531843  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0812 12:49:13.531848  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531852  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531858  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0812 12:49:13.531871  504120 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0812 12:49:13.531877  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531883  504120 command_runner.go:130] >       "size": "85953945",
	I0812 12:49:13.531889  504120 command_runner.go:130] >       "uid": null,
	I0812 12:49:13.531898  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531905  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.531909  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.531914  504120 command_runner.go:130] >     },
	I0812 12:49:13.531918  504120 command_runner.go:130] >     {
	I0812 12:49:13.531926  504120 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0812 12:49:13.531932  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.531937  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0812 12:49:13.531941  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531945  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.531952  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0812 12:49:13.531962  504120 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0812 12:49:13.531966  504120 command_runner.go:130] >       ],
	I0812 12:49:13.531971  504120 command_runner.go:130] >       "size": "63051080",
	I0812 12:49:13.531974  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.531978  504120 command_runner.go:130] >         "value": "0"
	I0812 12:49:13.531982  504120 command_runner.go:130] >       },
	I0812 12:49:13.531993  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.531999  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.532003  504120 command_runner.go:130] >       "pinned": false
	I0812 12:49:13.532006  504120 command_runner.go:130] >     },
	I0812 12:49:13.532009  504120 command_runner.go:130] >     {
	I0812 12:49:13.532015  504120 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0812 12:49:13.532022  504120 command_runner.go:130] >       "repoTags": [
	I0812 12:49:13.532028  504120 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0812 12:49:13.532034  504120 command_runner.go:130] >       ],
	I0812 12:49:13.532040  504120 command_runner.go:130] >       "repoDigests": [
	I0812 12:49:13.532051  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0812 12:49:13.532063  504120 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0812 12:49:13.532071  504120 command_runner.go:130] >       ],
	I0812 12:49:13.532077  504120 command_runner.go:130] >       "size": "750414",
	I0812 12:49:13.532085  504120 command_runner.go:130] >       "uid": {
	I0812 12:49:13.532092  504120 command_runner.go:130] >         "value": "65535"
	I0812 12:49:13.532100  504120 command_runner.go:130] >       },
	I0812 12:49:13.532104  504120 command_runner.go:130] >       "username": "",
	I0812 12:49:13.532111  504120 command_runner.go:130] >       "spec": null,
	I0812 12:49:13.532120  504120 command_runner.go:130] >       "pinned": true
	I0812 12:49:13.532126  504120 command_runner.go:130] >     }
	I0812 12:49:13.532129  504120 command_runner.go:130] >   ]
	I0812 12:49:13.532132  504120 command_runner.go:130] > }
	I0812 12:49:13.532453  504120 crio.go:514] all images are preloaded for cri-o runtime.
	I0812 12:49:13.532472  504120 cache_images.go:84] Images are preloaded, skipping loading
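The two image listings above are what lets minikube conclude the preload tarball does not need to be extracted again. A quick manual equivalent that checks the v1.30.3 control-plane images are present (assumes jq is installed on the node, which the test itself does not require):

  sudo crictl images --output json \
    | jq -r '.images[].repoTags[]' \
    | grep 'registry.k8s.io/kube-.*:v1.30.3'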
	I0812 12:49:13.532480  504120 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.30.3 crio true true} ...
	I0812 12:49:13.532621  504120 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-276573 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
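The [Service] fragment above is written into a kubelet systemd drop-in before kubeadm runs. To see what the node actually ends up running, asking systemd directly is the safer check, since the drop-in path can vary between ISO builds; this is only a sketch:

  # show the effective kubelet unit including any drop-ins
  systemctl cat kubelet
  # confirm the --node-ip and --hostname-override flags on the live process
  pgrep -af kubelet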
	I0812 12:49:13.532693  504120 ssh_runner.go:195] Run: crio config
	I0812 12:49:13.572341  504120 command_runner.go:130] ! time="2024-08-12 12:49:13.544681131Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0812 12:49:13.578206  504120 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0812 12:49:13.585313  504120 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0812 12:49:13.585341  504120 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0812 12:49:13.585350  504120 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0812 12:49:13.585354  504120 command_runner.go:130] > #
	I0812 12:49:13.585364  504120 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0812 12:49:13.585374  504120 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0812 12:49:13.585383  504120 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0812 12:49:13.585405  504120 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0812 12:49:13.585414  504120 command_runner.go:130] > # reload'.
	I0812 12:49:13.585425  504120 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0812 12:49:13.585438  504120 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0812 12:49:13.585451  504120 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0812 12:49:13.585464  504120 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0812 12:49:13.585473  504120 command_runner.go:130] > [crio]
	I0812 12:49:13.585483  504120 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0812 12:49:13.585495  504120 command_runner.go:130] > # containers images, in this directory.
	I0812 12:49:13.585505  504120 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0812 12:49:13.585522  504120 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0812 12:49:13.585533  504120 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0812 12:49:13.585551  504120 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0812 12:49:13.585560  504120 command_runner.go:130] > # imagestore = ""
	I0812 12:49:13.585571  504120 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0812 12:49:13.585584  504120 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0812 12:49:13.585592  504120 command_runner.go:130] > storage_driver = "overlay"
	I0812 12:49:13.585605  504120 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0812 12:49:13.585617  504120 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0812 12:49:13.585642  504120 command_runner.go:130] > storage_option = [
	I0812 12:49:13.585653  504120 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0812 12:49:13.585660  504120 command_runner.go:130] > ]
	I0812 12:49:13.585672  504120 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0812 12:49:13.585685  504120 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0812 12:49:13.585696  504120 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0812 12:49:13.585708  504120 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0812 12:49:13.585719  504120 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0812 12:49:13.585729  504120 command_runner.go:130] > # always happen on a node reboot
	I0812 12:49:13.585740  504120 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0812 12:49:13.585762  504120 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0812 12:49:13.585779  504120 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0812 12:49:13.585790  504120 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0812 12:49:13.585799  504120 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0812 12:49:13.585812  504120 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0812 12:49:13.585827  504120 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0812 12:49:13.585836  504120 command_runner.go:130] > # internal_wipe = true
	I0812 12:49:13.585850  504120 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0812 12:49:13.585862  504120 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0812 12:49:13.585872  504120 command_runner.go:130] > # internal_repair = false
	I0812 12:49:13.585883  504120 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0812 12:49:13.585896  504120 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0812 12:49:13.585909  504120 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0812 12:49:13.585920  504120 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0812 12:49:13.585930  504120 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0812 12:49:13.585939  504120 command_runner.go:130] > [crio.api]
	I0812 12:49:13.585948  504120 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0812 12:49:13.585959  504120 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0812 12:49:13.585975  504120 command_runner.go:130] > # IP address on which the stream server will listen.
	I0812 12:49:13.585985  504120 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0812 12:49:13.585998  504120 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0812 12:49:13.586009  504120 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0812 12:49:13.586017  504120 command_runner.go:130] > # stream_port = "0"
	I0812 12:49:13.586029  504120 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0812 12:49:13.586037  504120 command_runner.go:130] > # stream_enable_tls = false
	I0812 12:49:13.586049  504120 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0812 12:49:13.586065  504120 command_runner.go:130] > # stream_idle_timeout = ""
	I0812 12:49:13.586078  504120 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0812 12:49:13.586089  504120 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0812 12:49:13.586099  504120 command_runner.go:130] > # minutes.
	I0812 12:49:13.586109  504120 command_runner.go:130] > # stream_tls_cert = ""
	I0812 12:49:13.586120  504120 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0812 12:49:13.586133  504120 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0812 12:49:13.586142  504120 command_runner.go:130] > # stream_tls_key = ""
	I0812 12:49:13.586152  504120 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0812 12:49:13.586164  504120 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0812 12:49:13.586203  504120 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0812 12:49:13.586213  504120 command_runner.go:130] > # stream_tls_ca = ""
	I0812 12:49:13.586225  504120 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 12:49:13.586234  504120 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0812 12:49:13.586248  504120 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0812 12:49:13.586259  504120 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0812 12:49:13.586271  504120 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0812 12:49:13.586283  504120 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0812 12:49:13.586291  504120 command_runner.go:130] > [crio.runtime]
	I0812 12:49:13.586302  504120 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0812 12:49:13.586314  504120 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0812 12:49:13.586323  504120 command_runner.go:130] > # "nofile=1024:2048"
	I0812 12:49:13.586334  504120 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0812 12:49:13.586353  504120 command_runner.go:130] > # default_ulimits = [
	I0812 12:49:13.586362  504120 command_runner.go:130] > # ]
	I0812 12:49:13.586373  504120 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0812 12:49:13.586382  504120 command_runner.go:130] > # no_pivot = false
	I0812 12:49:13.586392  504120 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0812 12:49:13.586405  504120 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0812 12:49:13.586415  504120 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0812 12:49:13.586428  504120 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0812 12:49:13.586439  504120 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0812 12:49:13.586453  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 12:49:13.586464  504120 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0812 12:49:13.586474  504120 command_runner.go:130] > # Cgroup setting for conmon
	I0812 12:49:13.586488  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0812 12:49:13.586512  504120 command_runner.go:130] > conmon_cgroup = "pod"
	I0812 12:49:13.586525  504120 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0812 12:49:13.586535  504120 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0812 12:49:13.586560  504120 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0812 12:49:13.586569  504120 command_runner.go:130] > conmon_env = [
	I0812 12:49:13.586579  504120 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 12:49:13.586586  504120 command_runner.go:130] > ]
	I0812 12:49:13.586595  504120 command_runner.go:130] > # Additional environment variables to set for all the
	I0812 12:49:13.586606  504120 command_runner.go:130] > # containers. These are overridden if set in the
	I0812 12:49:13.586616  504120 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0812 12:49:13.586626  504120 command_runner.go:130] > # default_env = [
	I0812 12:49:13.586633  504120 command_runner.go:130] > # ]
	I0812 12:49:13.586643  504120 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0812 12:49:13.586659  504120 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0812 12:49:13.586668  504120 command_runner.go:130] > # selinux = false
	I0812 12:49:13.586678  504120 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0812 12:49:13.586690  504120 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0812 12:49:13.586699  504120 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0812 12:49:13.586709  504120 command_runner.go:130] > # seccomp_profile = ""
	I0812 12:49:13.586720  504120 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0812 12:49:13.586732  504120 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0812 12:49:13.586745  504120 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0812 12:49:13.586756  504120 command_runner.go:130] > # which might increase security.
	I0812 12:49:13.586765  504120 command_runner.go:130] > # This option is currently deprecated,
	I0812 12:49:13.586777  504120 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0812 12:49:13.586786  504120 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0812 12:49:13.586798  504120 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0812 12:49:13.586809  504120 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0812 12:49:13.586822  504120 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0812 12:49:13.586835  504120 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0812 12:49:13.586847  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.586857  504120 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0812 12:49:13.586869  504120 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0812 12:49:13.586879  504120 command_runner.go:130] > # the cgroup blockio controller.
	I0812 12:49:13.586889  504120 command_runner.go:130] > # blockio_config_file = ""
	I0812 12:49:13.586903  504120 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0812 12:49:13.586919  504120 command_runner.go:130] > # blockio parameters.
	I0812 12:49:13.586929  504120 command_runner.go:130] > # blockio_reload = false
	I0812 12:49:13.586941  504120 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0812 12:49:13.586950  504120 command_runner.go:130] > # irqbalance daemon.
	I0812 12:49:13.586961  504120 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0812 12:49:13.586973  504120 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0812 12:49:13.586991  504120 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0812 12:49:13.587005  504120 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0812 12:49:13.587018  504120 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0812 12:49:13.587030  504120 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0812 12:49:13.587040  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.587050  504120 command_runner.go:130] > # rdt_config_file = ""
	I0812 12:49:13.587062  504120 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0812 12:49:13.587070  504120 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0812 12:49:13.587115  504120 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0812 12:49:13.587125  504120 command_runner.go:130] > # separate_pull_cgroup = ""
	I0812 12:49:13.587135  504120 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0812 12:49:13.587148  504120 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0812 12:49:13.587157  504120 command_runner.go:130] > # will be added.
	I0812 12:49:13.587164  504120 command_runner.go:130] > # default_capabilities = [
	I0812 12:49:13.587173  504120 command_runner.go:130] > # 	"CHOWN",
	I0812 12:49:13.587181  504120 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0812 12:49:13.587190  504120 command_runner.go:130] > # 	"FSETID",
	I0812 12:49:13.587197  504120 command_runner.go:130] > # 	"FOWNER",
	I0812 12:49:13.587205  504120 command_runner.go:130] > # 	"SETGID",
	I0812 12:49:13.587212  504120 command_runner.go:130] > # 	"SETUID",
	I0812 12:49:13.587221  504120 command_runner.go:130] > # 	"SETPCAP",
	I0812 12:49:13.587228  504120 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0812 12:49:13.587237  504120 command_runner.go:130] > # 	"KILL",
	I0812 12:49:13.587244  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587256  504120 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0812 12:49:13.587270  504120 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0812 12:49:13.587281  504120 command_runner.go:130] > # add_inheritable_capabilities = false
	I0812 12:49:13.587293  504120 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0812 12:49:13.587305  504120 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 12:49:13.587314  504120 command_runner.go:130] > default_sysctls = [
	I0812 12:49:13.587329  504120 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0812 12:49:13.587338  504120 command_runner.go:130] > ]
	I0812 12:49:13.587346  504120 command_runner.go:130] > # List of devices on the host that a
	I0812 12:49:13.587359  504120 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0812 12:49:13.587369  504120 command_runner.go:130] > # allowed_devices = [
	I0812 12:49:13.587377  504120 command_runner.go:130] > # 	"/dev/fuse",
	I0812 12:49:13.587383  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587392  504120 command_runner.go:130] > # List of additional devices, specified as
	I0812 12:49:13.587406  504120 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0812 12:49:13.587418  504120 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0812 12:49:13.587430  504120 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0812 12:49:13.587441  504120 command_runner.go:130] > # additional_devices = [
	I0812 12:49:13.587448  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587458  504120 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0812 12:49:13.587468  504120 command_runner.go:130] > # cdi_spec_dirs = [
	I0812 12:49:13.587476  504120 command_runner.go:130] > # 	"/etc/cdi",
	I0812 12:49:13.587484  504120 command_runner.go:130] > # 	"/var/run/cdi",
	I0812 12:49:13.587489  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587500  504120 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0812 12:49:13.587513  504120 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0812 12:49:13.587522  504120 command_runner.go:130] > # Defaults to false.
	I0812 12:49:13.587532  504120 command_runner.go:130] > # device_ownership_from_security_context = false
	I0812 12:49:13.587550  504120 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0812 12:49:13.587562  504120 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0812 12:49:13.587571  504120 command_runner.go:130] > # hooks_dir = [
	I0812 12:49:13.587586  504120 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0812 12:49:13.587594  504120 command_runner.go:130] > # ]
	I0812 12:49:13.587605  504120 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0812 12:49:13.587619  504120 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0812 12:49:13.587631  504120 command_runner.go:130] > # its default mounts from the following two files:
	I0812 12:49:13.587639  504120 command_runner.go:130] > #
	I0812 12:49:13.587650  504120 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0812 12:49:13.587662  504120 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0812 12:49:13.587672  504120 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0812 12:49:13.587680  504120 command_runner.go:130] > #
	I0812 12:49:13.587689  504120 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0812 12:49:13.587710  504120 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0812 12:49:13.587723  504120 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0812 12:49:13.587734  504120 command_runner.go:130] > #      only add mounts it finds in this file.
	I0812 12:49:13.587742  504120 command_runner.go:130] > #
	I0812 12:49:13.587750  504120 command_runner.go:130] > # default_mounts_file = ""
	I0812 12:49:13.587761  504120 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0812 12:49:13.587773  504120 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0812 12:49:13.587782  504120 command_runner.go:130] > pids_limit = 1024
	I0812 12:49:13.587794  504120 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0812 12:49:13.587806  504120 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0812 12:49:13.587819  504120 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0812 12:49:13.587835  504120 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0812 12:49:13.587845  504120 command_runner.go:130] > # log_size_max = -1
	I0812 12:49:13.587858  504120 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0812 12:49:13.587868  504120 command_runner.go:130] > # log_to_journald = false
	I0812 12:49:13.587885  504120 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0812 12:49:13.587896  504120 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0812 12:49:13.587908  504120 command_runner.go:130] > # Path to directory for container attach sockets.
	I0812 12:49:13.587918  504120 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0812 12:49:13.587927  504120 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0812 12:49:13.587937  504120 command_runner.go:130] > # bind_mount_prefix = ""
	I0812 12:49:13.587949  504120 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0812 12:49:13.587957  504120 command_runner.go:130] > # read_only = false
	I0812 12:49:13.587968  504120 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0812 12:49:13.587980  504120 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0812 12:49:13.587990  504120 command_runner.go:130] > # live configuration reload.
	I0812 12:49:13.587998  504120 command_runner.go:130] > # log_level = "info"
	I0812 12:49:13.588009  504120 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0812 12:49:13.588021  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.588029  504120 command_runner.go:130] > # log_filter = ""
	I0812 12:49:13.588042  504120 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0812 12:49:13.588058  504120 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0812 12:49:13.588067  504120 command_runner.go:130] > # separated by comma.
	I0812 12:49:13.588082  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588091  504120 command_runner.go:130] > # uid_mappings = ""
	I0812 12:49:13.588104  504120 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0812 12:49:13.588124  504120 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0812 12:49:13.588134  504120 command_runner.go:130] > # separated by comma.
	I0812 12:49:13.588147  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588157  504120 command_runner.go:130] > # gid_mappings = ""
	I0812 12:49:13.588167  504120 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0812 12:49:13.588180  504120 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 12:49:13.588192  504120 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 12:49:13.588207  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588218  504120 command_runner.go:130] > # minimum_mappable_uid = -1
	I0812 12:49:13.588231  504120 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0812 12:49:13.588244  504120 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0812 12:49:13.588254  504120 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0812 12:49:13.588270  504120 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0812 12:49:13.588280  504120 command_runner.go:130] > # minimum_mappable_gid = -1
	I0812 12:49:13.588294  504120 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0812 12:49:13.588307  504120 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0812 12:49:13.588319  504120 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0812 12:49:13.588327  504120 command_runner.go:130] > # ctr_stop_timeout = 30
	I0812 12:49:13.588336  504120 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0812 12:49:13.588349  504120 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0812 12:49:13.588359  504120 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0812 12:49:13.588374  504120 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0812 12:49:13.588389  504120 command_runner.go:130] > drop_infra_ctr = false
	I0812 12:49:13.588402  504120 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0812 12:49:13.588414  504120 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0812 12:49:13.588429  504120 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0812 12:49:13.588438  504120 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0812 12:49:13.588452  504120 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0812 12:49:13.588464  504120 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0812 12:49:13.588477  504120 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0812 12:49:13.588489  504120 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0812 12:49:13.588498  504120 command_runner.go:130] > # shared_cpuset = ""
	I0812 12:49:13.588510  504120 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0812 12:49:13.588520  504120 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0812 12:49:13.588530  504120 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0812 12:49:13.588549  504120 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0812 12:49:13.588566  504120 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0812 12:49:13.588579  504120 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0812 12:49:13.588590  504120 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0812 12:49:13.588600  504120 command_runner.go:130] > # enable_criu_support = false
	I0812 12:49:13.588613  504120 command_runner.go:130] > # Enable/disable the generation of the container,
	I0812 12:49:13.588626  504120 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0812 12:49:13.588636  504120 command_runner.go:130] > # enable_pod_events = false
	I0812 12:49:13.588648  504120 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0812 12:49:13.588673  504120 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0812 12:49:13.588682  504120 command_runner.go:130] > # default_runtime = "runc"
	I0812 12:49:13.588692  504120 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0812 12:49:13.588707  504120 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I0812 12:49:13.588724  504120 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0812 12:49:13.588735  504120 command_runner.go:130] > # creation as a file is not desired either.
	I0812 12:49:13.588751  504120 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0812 12:49:13.588761  504120 command_runner.go:130] > # the hostname is being managed dynamically.
	I0812 12:49:13.588771  504120 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0812 12:49:13.588778  504120 command_runner.go:130] > # ]
	I0812 12:49:13.588789  504120 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0812 12:49:13.588801  504120 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0812 12:49:13.588812  504120 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0812 12:49:13.588823  504120 command_runner.go:130] > # Each entry in the table should follow the format:
	I0812 12:49:13.588831  504120 command_runner.go:130] > #
	I0812 12:49:13.588839  504120 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0812 12:49:13.588850  504120 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0812 12:49:13.588920  504120 command_runner.go:130] > # runtime_type = "oci"
	I0812 12:49:13.588931  504120 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0812 12:49:13.588938  504120 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0812 12:49:13.588945  504120 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0812 12:49:13.588953  504120 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0812 12:49:13.588963  504120 command_runner.go:130] > # monitor_env = []
	I0812 12:49:13.588971  504120 command_runner.go:130] > # privileged_without_host_devices = false
	I0812 12:49:13.588982  504120 command_runner.go:130] > # allowed_annotations = []
	I0812 12:49:13.588992  504120 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0812 12:49:13.589000  504120 command_runner.go:130] > # Where:
	I0812 12:49:13.589023  504120 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0812 12:49:13.589036  504120 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0812 12:49:13.589047  504120 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0812 12:49:13.589060  504120 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0812 12:49:13.589070  504120 command_runner.go:130] > #   in $PATH.
	I0812 12:49:13.589100  504120 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0812 12:49:13.589111  504120 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0812 12:49:13.589121  504120 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0812 12:49:13.589130  504120 command_runner.go:130] > #   state.
	I0812 12:49:13.589141  504120 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0812 12:49:13.589153  504120 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0812 12:49:13.589167  504120 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0812 12:49:13.589179  504120 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0812 12:49:13.589192  504120 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0812 12:49:13.589206  504120 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0812 12:49:13.589217  504120 command_runner.go:130] > #   The currently recognized values are:
	I0812 12:49:13.589229  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0812 12:49:13.589244  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0812 12:49:13.589256  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0812 12:49:13.589270  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0812 12:49:13.589284  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0812 12:49:13.589297  504120 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0812 12:49:13.589309  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0812 12:49:13.589329  504120 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0812 12:49:13.589342  504120 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0812 12:49:13.589355  504120 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0812 12:49:13.589365  504120 command_runner.go:130] > #   deprecated option "conmon".
	I0812 12:49:13.589379  504120 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0812 12:49:13.589388  504120 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0812 12:49:13.589400  504120 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0812 12:49:13.589411  504120 command_runner.go:130] > #   should be moved to the container's cgroup
	I0812 12:49:13.589422  504120 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0812 12:49:13.589433  504120 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0812 12:49:13.589445  504120 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0812 12:49:13.589456  504120 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0812 12:49:13.589464  504120 command_runner.go:130] > #
	I0812 12:49:13.589480  504120 command_runner.go:130] > # Using the seccomp notifier feature:
	I0812 12:49:13.589488  504120 command_runner.go:130] > #
	I0812 12:49:13.589498  504120 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0812 12:49:13.589511  504120 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0812 12:49:13.589519  504120 command_runner.go:130] > #
	I0812 12:49:13.589530  504120 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0812 12:49:13.589542  504120 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0812 12:49:13.589554  504120 command_runner.go:130] > #
	I0812 12:49:13.589565  504120 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0812 12:49:13.589573  504120 command_runner.go:130] > # feature.
	I0812 12:49:13.589579  504120 command_runner.go:130] > #
	I0812 12:49:13.589592  504120 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0812 12:49:13.589605  504120 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0812 12:49:13.589618  504120 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0812 12:49:13.589631  504120 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0812 12:49:13.589643  504120 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0812 12:49:13.589651  504120 command_runner.go:130] > #
	I0812 12:49:13.589662  504120 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0812 12:49:13.589674  504120 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0812 12:49:13.589682  504120 command_runner.go:130] > #
	I0812 12:49:13.589697  504120 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0812 12:49:13.589710  504120 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0812 12:49:13.589717  504120 command_runner.go:130] > #
	I0812 12:49:13.589728  504120 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0812 12:49:13.589741  504120 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0812 12:49:13.589747  504120 command_runner.go:130] > # limitation.
	I0812 12:49:13.589785  504120 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0812 12:49:13.589806  504120 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0812 12:49:13.589815  504120 command_runner.go:130] > runtime_type = "oci"
	I0812 12:49:13.589824  504120 command_runner.go:130] > runtime_root = "/run/runc"
	I0812 12:49:13.589832  504120 command_runner.go:130] > runtime_config_path = ""
	I0812 12:49:13.589843  504120 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0812 12:49:13.589852  504120 command_runner.go:130] > monitor_cgroup = "pod"
	I0812 12:49:13.589860  504120 command_runner.go:130] > monitor_exec_cgroup = ""
	I0812 12:49:13.589867  504120 command_runner.go:130] > monitor_env = [
	I0812 12:49:13.589878  504120 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0812 12:49:13.589895  504120 command_runner.go:130] > ]
	I0812 12:49:13.589907  504120 command_runner.go:130] > privileged_without_host_devices = false
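The allowed_annotations and seccomp-notifier machinery described in the comments above is opt-in per runtime handler. A minimal sketch of what an additional handler entry could look like (the handler name "runc-debug" is hypothetical; the annotation key is the one named in the comments above):

	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]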
	I0812 12:49:13.589920  504120 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0812 12:49:13.589932  504120 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0812 12:49:13.589945  504120 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0812 12:49:13.589961  504120 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0812 12:49:13.589976  504120 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0812 12:49:13.589989  504120 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0812 12:49:13.590007  504120 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0812 12:49:13.590021  504120 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0812 12:49:13.590028  504120 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0812 12:49:13.590037  504120 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0812 12:49:13.590043  504120 command_runner.go:130] > # Example:
	I0812 12:49:13.590051  504120 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0812 12:49:13.590058  504120 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0812 12:49:13.590065  504120 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0812 12:49:13.590073  504120 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0812 12:49:13.590080  504120 command_runner.go:130] > # cpuset = 0
	I0812 12:49:13.590086  504120 command_runner.go:130] > # cpushares = "0-1"
	I0812 12:49:13.590092  504120 command_runner.go:130] > # Where:
	I0812 12:49:13.590100  504120 command_runner.go:130] > # The workload name is workload-type.
	I0812 12:49:13.590113  504120 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0812 12:49:13.590122  504120 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0812 12:49:13.590131  504120 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0812 12:49:13.590143  504120 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0812 12:49:13.590152  504120 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0812 12:49:13.590160  504120 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0812 12:49:13.590170  504120 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0812 12:49:13.590178  504120 command_runner.go:130] > # Default value is set to true
	I0812 12:49:13.590185  504120 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0812 12:49:13.590195  504120 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0812 12:49:13.590202  504120 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0812 12:49:13.590209  504120 command_runner.go:130] > # Default value is set to 'false'
	I0812 12:49:13.590216  504120 command_runner.go:130] > # disable_hostport_mapping = false
	I0812 12:49:13.590226  504120 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0812 12:49:13.590231  504120 command_runner.go:130] > #
	I0812 12:49:13.590247  504120 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0812 12:49:13.590260  504120 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0812 12:49:13.590273  504120 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0812 12:49:13.590284  504120 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0812 12:49:13.590296  504120 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0812 12:49:13.590306  504120 command_runner.go:130] > [crio.image]
	I0812 12:49:13.590316  504120 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0812 12:49:13.590326  504120 command_runner.go:130] > # default_transport = "docker://"
	I0812 12:49:13.590338  504120 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0812 12:49:13.590352  504120 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0812 12:49:13.590361  504120 command_runner.go:130] > # global_auth_file = ""
	I0812 12:49:13.590370  504120 command_runner.go:130] > # The image used to instantiate infra containers.
	I0812 12:49:13.590381  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.590390  504120 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0812 12:49:13.590404  504120 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0812 12:49:13.590416  504120 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0812 12:49:13.590427  504120 command_runner.go:130] > # This option supports live configuration reload.
	I0812 12:49:13.590438  504120 command_runner.go:130] > # pause_image_auth_file = ""
	I0812 12:49:13.590449  504120 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0812 12:49:13.590459  504120 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0812 12:49:13.590472  504120 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0812 12:49:13.590484  504120 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0812 12:49:13.590495  504120 command_runner.go:130] > # pause_command = "/pause"
	I0812 12:49:13.590506  504120 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0812 12:49:13.590519  504120 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0812 12:49:13.590541  504120 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0812 12:49:13.590562  504120 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0812 12:49:13.590575  504120 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0812 12:49:13.590588  504120 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0812 12:49:13.590598  504120 command_runner.go:130] > # pinned_images = [
	I0812 12:49:13.590605  504120 command_runner.go:130] > # ]
	I0812 12:49:13.590617  504120 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0812 12:49:13.590628  504120 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0812 12:49:13.590642  504120 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0812 12:49:13.590655  504120 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0812 12:49:13.590666  504120 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0812 12:49:13.590683  504120 command_runner.go:130] > # signature_policy = ""
	I0812 12:49:13.590696  504120 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0812 12:49:13.590710  504120 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0812 12:49:13.590723  504120 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0812 12:49:13.590737  504120 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0812 12:49:13.590749  504120 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0812 12:49:13.590761  504120 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0812 12:49:13.590774  504120 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0812 12:49:13.590787  504120 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0812 12:49:13.590795  504120 command_runner.go:130] > # changing them here.
	I0812 12:49:13.590803  504120 command_runner.go:130] > # insecure_registries = [
	I0812 12:49:13.590811  504120 command_runner.go:130] > # ]
	I0812 12:49:13.590822  504120 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0812 12:49:13.590833  504120 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0812 12:49:13.590843  504120 command_runner.go:130] > # image_volumes = "mkdir"
	I0812 12:49:13.590853  504120 command_runner.go:130] > # Temporary directory to use for storing big files
	I0812 12:49:13.590861  504120 command_runner.go:130] > # big_files_temporary_dir = ""
	I0812 12:49:13.590874  504120 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0812 12:49:13.590883  504120 command_runner.go:130] > # CNI plugins.
	I0812 12:49:13.590890  504120 command_runner.go:130] > [crio.network]
	I0812 12:49:13.590903  504120 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0812 12:49:13.590915  504120 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0812 12:49:13.590926  504120 command_runner.go:130] > # cni_default_network = ""
	I0812 12:49:13.590939  504120 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0812 12:49:13.590948  504120 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0812 12:49:13.590957  504120 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0812 12:49:13.590966  504120 command_runner.go:130] > # plugin_dirs = [
	I0812 12:49:13.590973  504120 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0812 12:49:13.590981  504120 command_runner.go:130] > # ]
	I0812 12:49:13.590991  504120 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0812 12:49:13.591000  504120 command_runner.go:130] > [crio.metrics]
	I0812 12:49:13.591009  504120 command_runner.go:130] > # Globally enable or disable metrics support.
	I0812 12:49:13.591019  504120 command_runner.go:130] > enable_metrics = true
	I0812 12:49:13.591028  504120 command_runner.go:130] > # Specify enabled metrics collectors.
	I0812 12:49:13.591038  504120 command_runner.go:130] > # Per default all metrics are enabled.
	I0812 12:49:13.591048  504120 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0812 12:49:13.591068  504120 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0812 12:49:13.591081  504120 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0812 12:49:13.591091  504120 command_runner.go:130] > # metrics_collectors = [
	I0812 12:49:13.591100  504120 command_runner.go:130] > # 	"operations",
	I0812 12:49:13.591109  504120 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0812 12:49:13.591124  504120 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0812 12:49:13.591135  504120 command_runner.go:130] > # 	"operations_errors",
	I0812 12:49:13.591144  504120 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0812 12:49:13.591152  504120 command_runner.go:130] > # 	"image_pulls_by_name",
	I0812 12:49:13.591163  504120 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0812 12:49:13.591171  504120 command_runner.go:130] > # 	"image_pulls_failures",
	I0812 12:49:13.591179  504120 command_runner.go:130] > # 	"image_pulls_successes",
	I0812 12:49:13.591187  504120 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0812 12:49:13.591195  504120 command_runner.go:130] > # 	"image_layer_reuse",
	I0812 12:49:13.591203  504120 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0812 12:49:13.591213  504120 command_runner.go:130] > # 	"containers_oom_total",
	I0812 12:49:13.591222  504120 command_runner.go:130] > # 	"containers_oom",
	I0812 12:49:13.591229  504120 command_runner.go:130] > # 	"processes_defunct",
	I0812 12:49:13.591236  504120 command_runner.go:130] > # 	"operations_total",
	I0812 12:49:13.591246  504120 command_runner.go:130] > # 	"operations_latency_seconds",
	I0812 12:49:13.591255  504120 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0812 12:49:13.591265  504120 command_runner.go:130] > # 	"operations_errors_total",
	I0812 12:49:13.591273  504120 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0812 12:49:13.591284  504120 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0812 12:49:13.591293  504120 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0812 12:49:13.591300  504120 command_runner.go:130] > # 	"image_pulls_success_total",
	I0812 12:49:13.591308  504120 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0812 12:49:13.591316  504120 command_runner.go:130] > # 	"containers_oom_count_total",
	I0812 12:49:13.591327  504120 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0812 12:49:13.591337  504120 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0812 12:49:13.591343  504120 command_runner.go:130] > # ]
	I0812 12:49:13.591352  504120 command_runner.go:130] > # The port on which the metrics server will listen.
	I0812 12:49:13.591359  504120 command_runner.go:130] > # metrics_port = 9090
	I0812 12:49:13.591371  504120 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0812 12:49:13.591381  504120 command_runner.go:130] > # metrics_socket = ""
	I0812 12:49:13.591391  504120 command_runner.go:130] > # The certificate for the secure metrics server.
	I0812 12:49:13.591410  504120 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0812 12:49:13.591423  504120 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0812 12:49:13.591434  504120 command_runner.go:130] > # certificate on any modification event.
	I0812 12:49:13.591442  504120 command_runner.go:130] > # metrics_cert = ""
	I0812 12:49:13.591453  504120 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0812 12:49:13.591464  504120 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0812 12:49:13.591472  504120 command_runner.go:130] > # metrics_key = ""
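As a sketch only, restricting metrics to a subset of the collector names listed above (while keeping the port shown) would look like:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]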
	I0812 12:49:13.591484  504120 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0812 12:49:13.591494  504120 command_runner.go:130] > [crio.tracing]
	I0812 12:49:13.591504  504120 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0812 12:49:13.591513  504120 command_runner.go:130] > # enable_tracing = false
	I0812 12:49:13.591523  504120 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0812 12:49:13.591532  504120 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0812 12:49:13.591550  504120 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0812 12:49:13.591561  504120 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0812 12:49:13.591572  504120 command_runner.go:130] > # CRI-O NRI configuration.
	I0812 12:49:13.591580  504120 command_runner.go:130] > [crio.nri]
	I0812 12:49:13.591588  504120 command_runner.go:130] > # Globally enable or disable NRI.
	I0812 12:49:13.591602  504120 command_runner.go:130] > # enable_nri = false
	I0812 12:49:13.591612  504120 command_runner.go:130] > # NRI socket to listen on.
	I0812 12:49:13.591620  504120 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0812 12:49:13.591629  504120 command_runner.go:130] > # NRI plugin directory to use.
	I0812 12:49:13.591637  504120 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0812 12:49:13.591649  504120 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0812 12:49:13.591659  504120 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0812 12:49:13.591669  504120 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0812 12:49:13.591679  504120 command_runner.go:130] > # nri_disable_connections = false
	I0812 12:49:13.591691  504120 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0812 12:49:13.591699  504120 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0812 12:49:13.591707  504120 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0812 12:49:13.591718  504120 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0812 12:49:13.591731  504120 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0812 12:49:13.591740  504120 command_runner.go:130] > [crio.stats]
	I0812 12:49:13.591750  504120 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0812 12:49:13.591762  504120 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0812 12:49:13.591772  504120 command_runner.go:130] > # stats_collection_period = 0
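The dump above is the CRI-O configuration as read back from the node. Assuming the standard drop-in directory /etc/crio/crio.conf.d is in use (an assumption; the command producing this dump is not shown here), individual keys could also be overridden without editing the main file, for example (file name illustrative, values taken from the dump above):

	# /etc/crio/crio.conf.d/99-overrides.conf
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	pids_limit = 1024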
	I0812 12:49:13.591963  504120 cni.go:84] Creating CNI manager for ""
	I0812 12:49:13.591979  504120 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0812 12:49:13.591995  504120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 12:49:13.592026  504120 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-276573 NodeName:multinode-276573 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 12:49:13.592205  504120 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-276573"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
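The rendered kubeadm config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If the profile is still up, one way to inspect that file from the host would be something like the following (assuming minikube ssh accepts an inline command, as recent releases do):

	minikube ssh -p multinode-276573 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new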
	I0812 12:49:13.592287  504120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0812 12:49:13.602993  504120 command_runner.go:130] > kubeadm
	I0812 12:49:13.603020  504120 command_runner.go:130] > kubectl
	I0812 12:49:13.603025  504120 command_runner.go:130] > kubelet
	I0812 12:49:13.603045  504120 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 12:49:13.603101  504120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 12:49:13.613036  504120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0812 12:49:13.630443  504120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 12:49:13.647619  504120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0812 12:49:13.665459  504120 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0812 12:49:13.669965  504120 command_runner.go:130] > 192.168.39.187	control-plane.minikube.internal
	I0812 12:49:13.670179  504120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 12:49:13.822660  504120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 12:49:13.838557  504120 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573 for IP: 192.168.39.187
	I0812 12:49:13.838585  504120 certs.go:194] generating shared ca certs ...
	I0812 12:49:13.838609  504120 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 12:49:13.838852  504120 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 12:49:13.838922  504120 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 12:49:13.838935  504120 certs.go:256] generating profile certs ...
	I0812 12:49:13.839058  504120 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/client.key
	I0812 12:49:13.839144  504120 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key.8ffd67ec
	I0812 12:49:13.839198  504120 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key
	I0812 12:49:13.839214  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0812 12:49:13.839235  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0812 12:49:13.839252  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0812 12:49:13.839268  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0812 12:49:13.839282  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0812 12:49:13.839301  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0812 12:49:13.839319  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0812 12:49:13.839335  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0812 12:49:13.839396  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 12:49:13.839441  504120 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 12:49:13.839452  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 12:49:13.839491  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 12:49:13.839536  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 12:49:13.839574  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 12:49:13.839631  504120 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 12:49:13.839688  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> /usr/share/ca-certificates/4703752.pem
	I0812 12:49:13.839709  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:13.839734  504120 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem -> /usr/share/ca-certificates/470375.pem
	I0812 12:49:13.840657  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 12:49:13.868791  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 12:49:13.894809  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 12:49:13.921370  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 12:49:13.948316  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0812 12:49:13.975097  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 12:49:14.002642  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 12:49:14.028118  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/multinode-276573/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0812 12:49:14.053009  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 12:49:14.076832  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 12:49:14.102494  504120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 12:49:14.127311  504120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 12:49:14.144820  504120 ssh_runner.go:195] Run: openssl version
	I0812 12:49:14.150848  504120 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0812 12:49:14.150944  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 12:49:14.161931  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166600  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166653  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.166695  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 12:49:14.172513  504120 command_runner.go:130] > 3ec20f2e
	I0812 12:49:14.172595  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 12:49:14.182703  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 12:49:14.194410  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199814  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199858  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.199906  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 12:49:14.206190  504120 command_runner.go:130] > b5213941
	I0812 12:49:14.206297  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 12:49:14.216949  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 12:49:14.228448  504120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233179  504120 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233219  504120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.233266  504120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 12:49:14.238863  504120 command_runner.go:130] > 51391683
	I0812 12:49:14.238957  504120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
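	The three ln -fs calls above follow OpenSSL's hashed-directory convention: each CA certificate under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the name <subject-hash>.0, where the hash is the value printed by openssl x509 -hash -noout (e.g. b5213941 for minikubeCA.pem in this log). A minimal Go sketch of that same step — an illustration only, not minikube's actual certs.go code; the certificate path is taken from the log above:

	// Sketch (assumption, not minikube's implementation): link a CA cert into
	// /etc/ssl/certs under its OpenSSL subject-name hash, as the log above does.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkBySubjectHash(certPath string) error {
		// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of `ln -fs`: drop any stale link, then create the symlink.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}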
	I0812 12:49:14.248320  504120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:49:14.252975  504120 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 12:49:14.253008  504120 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0812 12:49:14.253016  504120 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0812 12:49:14.253025  504120 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0812 12:49:14.253036  504120 command_runner.go:130] > Access: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253043  504120 command_runner.go:130] > Modify: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253050  504120 command_runner.go:130] > Change: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253058  504120 command_runner.go:130] >  Birth: 2024-08-12 12:42:14.742188345 +0000
	I0812 12:49:14.253188  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 12:49:14.258934  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.259087  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 12:49:14.264841  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.264904  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 12:49:14.270480  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.270646  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 12:49:14.276205  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.276278  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 12:49:14.282367  504120 command_runner.go:130] > Certificate will not expire
	I0812 12:49:14.282555  504120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0812 12:49:14.288371  504120 command_runner.go:130] > Certificate will not expire
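	Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status yields the "Certificate will not expire" lines recorded here. A minimal Go sketch of the same check done natively — an illustration only, not minikube's implementation; the etcd server certificate path is taken from the log above:

	// Sketch (assumption): report whether a certificate expires within the next
	// 24 hours, mirroring `openssl x509 -noout -checkend 86400` from the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expired "soon" if NotAfter falls before now + d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire within 24h")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}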
	I0812 12:49:14.288435  504120 kubeadm.go:392] StartCluster: {Name:multinode-276573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-276573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:49:14.288563  504120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 12:49:14.288638  504120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 12:49:14.329243  504120 command_runner.go:130] > 4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809
	I0812 12:49:14.329279  504120 command_runner.go:130] > aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb
	I0812 12:49:14.329287  504120 command_runner.go:130] > fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd
	I0812 12:49:14.329296  504120 command_runner.go:130] > 129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c
	I0812 12:49:14.329304  504120 command_runner.go:130] > af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c
	I0812 12:49:14.329312  504120 command_runner.go:130] > 419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510
	I0812 12:49:14.329320  504120 command_runner.go:130] > e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9
	I0812 12:49:14.329331  504120 command_runner.go:130] > 877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63
	I0812 12:49:14.330630  504120 cri.go:89] found id: "4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809"
	I0812 12:49:14.330644  504120 cri.go:89] found id: "aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb"
	I0812 12:49:14.330647  504120 cri.go:89] found id: "fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd"
	I0812 12:49:14.330650  504120 cri.go:89] found id: "129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c"
	I0812 12:49:14.330652  504120 cri.go:89] found id: "af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c"
	I0812 12:49:14.330655  504120 cri.go:89] found id: "419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510"
	I0812 12:49:14.330658  504120 cri.go:89] found id: "e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9"
	I0812 12:49:14.330661  504120 cri.go:89] found id: "877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63"
	I0812 12:49:14.330663  504120 cri.go:89] found id: ""
	I0812 12:49:14.330710  504120 ssh_runner.go:195] Run: sudo runc list -f json
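	The crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system run above prints one container ID per line, which the log then records as the "found id" entries. A minimal Go sketch of that listing step — an illustration only, not minikube's cri.go:

	// Sketch (assumption): list kube-system container IDs via crictl, as above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// --quiet output is one container ID per line; skip blank lines.
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}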
	
	
	==> CRI-O <==
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.169369331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467204169342260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88b53c0d-8894-4ecd-b2ea-b930508f505c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.169883367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0844cff-d40e-457c-a353-c46dd3f34b39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.170000648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0844cff-d40e-457c-a353-c46dd3f34b39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.170400104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0844cff-d40e-457c-a353-c46dd3f34b39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.211714149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed2c0975-8e75-457d-99d2-516c1a8f4104 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.211787128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed2c0975-8e75-457d-99d2-516c1a8f4104 name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.212996984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=122733eb-80bf-45cb-948e-7a7596392dee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.213492712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467204213464803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=122733eb-80bf-45cb-948e-7a7596392dee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.214166828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=978278e8-6bd4-4498-ad05-de75971aedb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.214245176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=978278e8-6bd4-4498-ad05-de75971aedb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.214630440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=978278e8-6bd4-4498-ad05-de75971aedb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.262768850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a8aadc2-08d5-41df-86bf-82b540e61e0e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.262872704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a8aadc2-08d5-41df-86bf-82b540e61e0e name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.264169696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acf9e313-c609-4ca5-a69e-ff0b7dca4110 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.264628678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467204264607017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acf9e313-c609-4ca5-a69e-ff0b7dca4110 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.265233792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58712219-70ce-405b-ba47-f320386b2fc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.265292230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58712219-70ce-405b-ba47-f320386b2fc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.266024084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58712219-70ce-405b-ba47-f320386b2fc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.313839275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7798452b-56fd-43ec-8982-f70a1f6f631b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.314081710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7798452b-56fd-43ec-8982-f70a1f6f631b name=/runtime.v1.RuntimeService/Version
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.315253377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08d7601a-aea2-427c-b1cb-0f5de1398c96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.315889193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467204315862177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08d7601a-aea2-427c-b1cb-0f5de1398c96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.316887863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7033371-9a0d-4ace-a2b7-e9fdc43bf0e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.317016739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7033371-9a0d-4ace-a2b7-e9fdc43bf0e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 12:53:24 multinode-276573 crio[2883]: time="2024-08-12 12:53:24.317356806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6be38c95e8515ea190c82296b337405e71917ed6265117fd7f16f414b09fde4,PodSandboxId:6ea92517fa40e45837db1566f036936256a5b4d2ef86f14ac4a3cb172bb91966,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723466994446838687,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85,PodSandboxId:396e4d89062054163adb52bc76dcb99afe4c8eff0302eb9b816bc4fc945ee3ac,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723466960764865106,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be,PodSandboxId:9f624d8bd2d16e937103723f976c6d1fe814fa9003210ed5a0c8ffe0bd2f6920,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723466960669323544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4793b358f032580b135410ef6d2e74c25ad53ba5f5b06fe81d1b46f27fc46ffc,PodSandboxId:4fc8add5d7bc4b8917210d46769d897b8e09de07f1989ebd2d0fa16e15f23d0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723466960643673339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},An
notations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1,PodSandboxId:890a7895a53a40a0e1a655b49a76e2ef003d93eb9a7ffd2dc66a716c6b36c322,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723466960582151033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.ku
bernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc,PodSandboxId:0bee4ca84f60686c4b8b4b29196ce89322aa5b8a89085394f90f829898742c04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723466956774328246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9486f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554,PodSandboxId:695095a29617f9a9b35f2499765c6132f9219aa3b6fbfda572648e36d3d01fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723466956767428645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map[string]string{io.kube
rnetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248,PodSandboxId:b5306eef235ef4461deb9bd861fefa35ca6527112183cdbd972e846139a531d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723466956746612610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4,PodSandboxId:0afcb0871f79482c24af256ddc4f697f8ce5f1a87e65336bd14bf97c2c627af2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723466956664731025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed20778843a5d34e2ab316ba95282e4aaefd7c5944bdbea60afc2993fd52682,PodSandboxId:c878191fe38507d22023d2510e66a25e246891a365f394d35c58324c822ff422,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723466631970677359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sww5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fd62a65-9720-4836-992e-94d373a6cd68,},Annotations:map[string]string{io.kubernetes.container.hash: ce44416f,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb,PodSandboxId:5d040b3b690d82459d0a030696352b08c6a7c77fac0b1e5756531de916d3cdf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723466573087550925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x69zs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 336c890e-36b0-41c8-adcb-c8ff7c9a84f6,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcc381,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bdc0932624519621f0d2a01c2117dfd1cc5ba90f42fa00e194a7673cacd5809,PodSandboxId:2fa953bfaab3e813c7a5e4f80612cd6e88cc3d0b09bc7a967d2213e69ed18a5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723466573090908520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 80784c3e-31fe-4aad-8f01-fd00ccdc0333,},Annotations:map[string]string{io.kubernetes.container.hash: 246798d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd,PodSandboxId:26fe210b6c97f76af5b8483efce3f2e58374e73ee0df909e3ba9f353f5401f2d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723466560954511062,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xmzhc,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 214cf688-5730-4864-9796-d8f2f321cda3,},Annotations:map[string]string{io.kubernetes.container.hash: 56b98cd6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c,PodSandboxId:95dda882abfaaea90d2007e0b017f0e379a578140251c9998cc738f3b6b48c6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723466557076633487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bhzlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ccc5f5f-1f74-4813-a584-05f8c760b5e5,},Annotations:map[string]string{io.kubernetes.container.hash: e742ad9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510,PodSandboxId:45b0266e2265f1d3dbc7378d2629f3a2f4cb3bd6d35a6cd6aced197607e613cf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723466537743330340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94
86f4e43e7d2cc8e77f94846de0ea1c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63,PodSandboxId:4b5e66867dfff0222e3c66e4fbaf0d04f21479cca07bceaeb8b2a7fb49f87cf8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723466537725396287,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5ee29649281828806fd54c8bfe633c,},Annotations:
map[string]string{io.kubernetes.container.hash: 2bc23685,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c,PodSandboxId:c14c34d24918520620e00d59062a0307626295f290de20bcd51be0b7438ef68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723466537776238774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12a0e13f606b1eb3104d5b9aa291e2f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9,PodSandboxId:a1c85aa476c4ee4cee293c5c67272746e0aa385c87fca83c9fa5d37b367ad98a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723466537727667719,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-276573,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 479bb22d4ba874cd1f361b04b645d1e6,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7033371-9a0d-4ace-a2b7-e9fdc43bf0e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6be38c95e851       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6ea92517fa40e       busybox-fc5497c4f-9sww5
	f3867db2c4181       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   396e4d8906205       kindnet-xmzhc
	e84014b3790e1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   9f624d8bd2d16       coredns-7db6d8ff4d-x69zs
	4793b358f0325       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   4fc8add5d7bc4       storage-provisioner
	7c4267fc77c67       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   890a7895a53a4       kube-proxy-bhzlc
	a669daee81213       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   0bee4ca84f606       kube-scheduler-multinode-276573
	1a964c2e9317b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   695095a29617f       kube-controller-manager-multinode-276573
	33defdcc7b94e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   b5306eef235ef       etcd-multinode-276573
	1bb5565b5f8ed       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   0afcb0871f794       kube-apiserver-multinode-276573
	7ed20778843a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c878191fe3850       busybox-fc5497c4f-9sww5
	4bdc093262451       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   2fa953bfaab3e       storage-provisioner
	aaf10a04808d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   5d040b3b690d8       coredns-7db6d8ff4d-x69zs
	fdc6683739c7f       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    10 minutes ago      Exited              kindnet-cni               0                   26fe210b6c97f       kindnet-xmzhc
	129aad74969bd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   95dda882abfaa       kube-proxy-bhzlc
	af96c3a99e025       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   c14c34d249185       kube-apiserver-multinode-276573
	419ac7b21b8f7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   45b0266e2265f       kube-scheduler-multinode-276573
	e4af25a66f030       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   a1c85aa476c4e       kube-controller-manager-multinode-276573
	877dafd292234       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   4b5e66867dfff       etcd-multinode-276573
	
	
	==> coredns [aaf10a04808d159a38821a8da8e70905b120faed4c8eb658100392615d6d45eb] <==
	[INFO] 10.244.1.2:48282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515075s
	[INFO] 10.244.1.2:38786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153234s
	[INFO] 10.244.1.2:41294 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094185s
	[INFO] 10.244.1.2:50512 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001439884s
	[INFO] 10.244.1.2:47980 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148447s
	[INFO] 10.244.1.2:46916 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010897s
	[INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101687s
	[INFO] 10.244.0.3:46326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106386s
	[INFO] 10.244.0.3:39133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058126s
	[INFO] 10.244.0.3:43809 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090478s
	[INFO] 10.244.0.3:43710 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046522s
	[INFO] 10.244.1.2:37133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00025321s
	[INFO] 10.244.1.2:44121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008955s
	[INFO] 10.244.1.2:44473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063934s
	[INFO] 10.244.1.2:44808 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006562s
	[INFO] 10.244.0.3:39778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111851s
	[INFO] 10.244.0.3:48316 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175516s
	[INFO] 10.244.0.3:44888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109089s
	[INFO] 10.244.0.3:45339 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071058s
	[INFO] 10.244.1.2:60909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239096s
	[INFO] 10.244.1.2:47228 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093107s
	[INFO] 10.244.1.2:46141 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092147s
	[INFO] 10.244.1.2:34310 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101304s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e84014b3790e18c35e7c5dbb4dd760a16c39f0510edfcba0e12994560e8a04be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51041 - 24476 "HINFO IN 7729721158021257501.2693719872358529416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015310431s
	
	
	==> describe nodes <==
	Name:               multinode-276573
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-276573
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=multinode-276573
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_42_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:42:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-276573
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:53:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 12:49:20 +0000   Mon, 12 Aug 2024 12:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    multinode-276573
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac32fc823814aeba709a3e679b19cf4
	  System UUID:                4ac32fc8-2381-4aeb-a709-a3e679b19cf4
	  Boot ID:                    4e7fe0b1-4961-44d9-a7f5-a38dfc27ced5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sww5                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-7db6d8ff4d-x69zs                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-276573                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-xmzhc                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-276573              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-276573     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bhzlc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-276573              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-276573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-276573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-276573 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-276573 event: Registered Node multinode-276573 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-276573 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-276573 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-276573 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-276573 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                node-controller  Node multinode-276573 event: Registered Node multinode-276573 in Controller
	
	
	Name:               multinode-276573-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-276573-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=multinode-276573
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_12T12_49_58_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:49:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-276573-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 12:50:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:51:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:51:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:51:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 12 Aug 2024 12:50:29 +0000   Mon, 12 Aug 2024 12:51:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    multinode-276573-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bbfcd02c8104f5a9a0265512559dca8
	  System UUID:                7bbfcd02-c810-4f5a-9a02-65512559dca8
	  Boot ID:                    be877137-ffd4-4a49-9c38-1ae0ada80d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwms8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-z8nqg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-vvt5d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-276573-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-276573-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-276573-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-276573-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-276573-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-276573-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-276573-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-276573-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-276573-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.068341] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068718] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.163659] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.143142] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.262224] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.283669] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.713843] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.068052] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.006774] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.092483] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.701978] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.978465] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.180619] kauditd_printk_skb: 56 callbacks suppressed
	[Aug12 12:43] kauditd_printk_skb: 14 callbacks suppressed
	[Aug12 12:49] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.151047] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.176033] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.141807] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.298217] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +0.784673] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +2.063646] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +4.683224] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.058363] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.357162] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[ +20.509706] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [33defdcc7b94edcab6001a21d03c8fdc4e7e478844f46f6908e62636f17fd248] <==
	{"level":"info","ts":"2024-08-12T12:49:17.154169Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-12T12:49:17.154434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 switched to configuration voters=(17951008399345981744)"}
	{"level":"info","ts":"2024-08-12T12:49:17.154505Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","added-peer-id":"f91ecb07db121930","added-peer-peer-urls":["https://192.168.39.187:2380"]}
	{"level":"info","ts":"2024-08-12T12:49:17.15464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:49:17.15468Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:49:17.165424Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T12:49:17.165699Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f91ecb07db121930","initial-advertise-peer-urls":["https://192.168.39.187:2380"],"listen-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.187:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T12:49:17.165792Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T12:49:17.167293Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:49:17.176937Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:49:18.685656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 received MsgPreVoteResp from f91ecb07db121930 at term 2"}
	{"level":"info","ts":"2024-08-12T12:49:18.685908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.685933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 received MsgVoteResp from f91ecb07db121930 at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.686032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f91ecb07db121930 became leader at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.686059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f91ecb07db121930 elected leader f91ecb07db121930 at term 3"}
	{"level":"info","ts":"2024-08-12T12:49:18.692555Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f91ecb07db121930","local-member-attributes":"{Name:multinode-276573 ClientURLs:[https://192.168.39.187:2379]}","request-path":"/0/members/f91ecb07db121930/attributes","cluster-id":"c7f008ff80693278","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T12:49:18.692566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T12:49:18.692805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T12:49:18.692849Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T12:49:18.692612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T12:49:18.694834Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.187:2379"}
	{"level":"info","ts":"2024-08-12T12:49:18.695567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-12T12:50:06.864105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.519199ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382394613610 > lease_revoke:<id:19309146a2c11eb6>","response":"size:29"}
	
	
	==> etcd [877dafd292234ba1a224fa02070c01dae4238a07f360122bf666db9752d62f63] <==
	{"level":"info","ts":"2024-08-12T12:42:18.808746Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7f008ff80693278","local-member-id":"f91ecb07db121930","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:42:18.808859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T12:42:18.808907Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-08-12T12:43:25.146567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.95453ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382287368347 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:193091469c5cb09a>","response":"size:42"}
	{"level":"info","ts":"2024-08-12T12:43:25.146785Z","caller":"traceutil/trace.go:171","msg":"trace[611996480] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:469; }","duration":"113.04454ms","start":"2024-08-12T12:43:25.033726Z","end":"2024-08-12T12:43:25.146771Z","steps":["trace[611996480] 'read index received'  (duration: 8.723607ms)","trace[611996480] 'applied index is now lower than readState.Index'  (duration: 104.320358ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:43:25.147298Z","caller":"traceutil/trace.go:171","msg":"trace[1409015642] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"172.545048ms","start":"2024-08-12T12:43:24.974738Z","end":"2024-08-12T12:43:25.147283Z","steps":["trace[1409015642] 'process raft request'  (duration: 172.000958ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:25.147589Z","caller":"traceutil/trace.go:171","msg":"trace[1403735309] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"238.141131ms","start":"2024-08-12T12:43:24.909438Z","end":"2024-08-12T12:43:25.14758Z","steps":["trace[1403735309] 'process raft request'  (duration: 237.22749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:43:25.148033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.216252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-276573-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-08-12T12:43:25.14808Z","caller":"traceutil/trace.go:171","msg":"trace[791982244] range","detail":"{range_begin:/registry/minions/multinode-276573-m02; range_end:; response_count:1; response_revision:449; }","duration":"114.361358ms","start":"2024-08-12T12:43:25.03371Z","end":"2024-08-12T12:43:25.148071Z","steps":["trace[791982244] 'agreement among raft nodes before linearized reading'  (duration: 114.202949ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:26.071237Z","caller":"traceutil/trace.go:171","msg":"trace[2128249433] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"181.898761ms","start":"2024-08-12T12:43:25.889288Z","end":"2024-08-12T12:43:26.071186Z","steps":["trace[2128249433] 'process raft request'  (duration: 181.714174ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:43:29.618824Z","caller":"traceutil/trace.go:171","msg":"trace[215396676] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"114.356141ms","start":"2024-08-12T12:43:29.504447Z","end":"2024-08-12T12:43:29.618803Z","steps":["trace[215396676] 'process raft request'  (duration: 114.237454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-12T12:44:24.241875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.161215ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1815110382287368811 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:579 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-12T12:44:24.242712Z","caller":"traceutil/trace.go:171","msg":"trace[418331014] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"224.355485ms","start":"2024-08-12T12:44:24.018337Z","end":"2024-08-12T12:44:24.242693Z","steps":["trace[418331014] 'process raft request'  (duration: 224.141983ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:44:24.242915Z","caller":"traceutil/trace.go:171","msg":"trace[1260288387] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"171.727666ms","start":"2024-08-12T12:44:24.071171Z","end":"2024-08-12T12:44:24.242898Z","steps":["trace[1260288387] 'process raft request'  (duration: 171.360274ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-12T12:44:24.242917Z","caller":"traceutil/trace.go:171","msg":"trace[587866683] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"235.676604ms","start":"2024-08-12T12:44:24.007228Z","end":"2024-08-12T12:44:24.242905Z","steps":["trace[587866683] 'process raft request'  (duration: 121.846199ms)","trace[587866683] 'compare'  (duration: 112.079893ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-12T12:47:40.880303Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T12:47:40.880417Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-276573","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	{"level":"warn","ts":"2024-08-12T12:47:40.880514Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.882368Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.957119Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T12:47:40.957273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.187:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T12:47:40.957395Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f91ecb07db121930","current-leader-member-id":"f91ecb07db121930"}
	{"level":"info","ts":"2024-08-12T12:47:40.960089Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:47:40.960246Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.187:2380"}
	{"level":"info","ts":"2024-08-12T12:47:40.960283Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-276573","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.187:2380"],"advertise-client-urls":["https://192.168.39.187:2379"]}
	
	
	==> kernel <==
	 12:53:24 up 11 min,  0 users,  load average: 0.32, 0.26, 0.14
	Linux multinode-276573 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f3867db2c41815cf050a68ec503bf4348946611c6ddd4aa082ff4344144f2a85] <==
	I0812 12:52:21.891520       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:52:31.891905       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:52:31.892130       1 main.go:299] handling current node
	I0812 12:52:31.892172       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:52:31.892192       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:52:41.891534       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:52:41.891717       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:52:41.892163       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:52:41.892268       1 main.go:299] handling current node
	I0812 12:52:51.892762       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:52:51.892822       1 main.go:299] handling current node
	I0812 12:52:51.892845       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:52:51.892851       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:53:01.898924       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:53:01.899013       1 main.go:299] handling current node
	I0812 12:53:01.899027       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:53:01.899033       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:53:11.896375       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:53:11.896497       1 main.go:299] handling current node
	I0812 12:53:11.896527       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:53:11.896546       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:53:21.891332       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:53:21.891367       1 main.go:299] handling current node
	I0812 12:53:21.891381       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:53:21.891386       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fdc6683739c7f84d63363ce344b710245c4c51dcaf21536a9d8022a7fa35dffd] <==
	I0812 12:46:52.087318       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:02.093199       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:02.093256       1 main.go:299] handling current node
	I0812 12:47:02.093273       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:02.093279       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:02.093445       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:02.093470       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:12.092615       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:12.092723       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:12.092877       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:12.092904       1 main.go:299] handling current node
	I0812 12:47:12.092926       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:12.092941       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:22.094062       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:22.094110       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:22.094246       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:22.094271       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:22.094329       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:22.094351       1 main.go:299] handling current node
	I0812 12:47:32.092884       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0812 12:47:32.093254       1 main.go:322] Node multinode-276573-m02 has CIDR [10.244.1.0/24] 
	I0812 12:47:32.093645       1 main.go:295] Handling node with IPs: map[192.168.39.82:{}]
	I0812 12:47:32.093685       1 main.go:322] Node multinode-276573-m03 has CIDR [10.244.3.0/24] 
	I0812 12:47:32.093891       1 main.go:295] Handling node with IPs: map[192.168.39.187:{}]
	I0812 12:47:32.093927       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1bb5565b5f8ede336299fed41cb0d9981c0d460ca3eabeda70b7d831417683c4] <==
	I0812 12:49:19.921391       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0812 12:49:20.003256       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 12:49:20.009503       1 aggregator.go:165] initial CRD sync complete...
	I0812 12:49:20.009564       1 autoregister_controller.go:141] Starting autoregister controller
	I0812 12:49:20.009571       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 12:49:20.046677       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 12:49:20.048391       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 12:49:20.048473       1 policy_source.go:224] refreshing policies
	I0812 12:49:20.085846       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 12:49:20.087524       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 12:49:20.100670       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 12:49:20.101490       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 12:49:20.107053       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0812 12:49:20.107117       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0812 12:49:20.109659       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 12:49:20.117791       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0812 12:49:20.118202       1 cache.go:39] Caches are synced for autoregister controller
	I0812 12:49:20.903345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 12:49:22.064204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 12:49:22.189151       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 12:49:22.211940       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 12:49:22.282860       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 12:49:22.291226       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 12:49:32.366521       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 12:49:32.492328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [af96c3a99e0258ae90ce6214fea2c340d65f36444c9707455baa54c4ccd8564c] <==
	I0812 12:47:40.892310       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0812 12:47:40.892351       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0812 12:47:40.892391       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0812 12:47:40.892441       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0812 12:47:40.892490       1 controller.go:129] Ending legacy_token_tracking_controller
	I0812 12:47:40.892520       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0812 12:47:40.892559       1 establishing_controller.go:87] Shutting down EstablishingController
	I0812 12:47:40.892611       1 naming_controller.go:302] Shutting down NamingConditionController
	I0812 12:47:40.892644       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0812 12:47:40.892679       1 controller.go:167] Shutting down OpenAPI controller
	I0812 12:47:40.892716       1 available_controller.go:439] Shutting down AvailableConditionController
	I0812 12:47:40.892747       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0812 12:47:40.892780       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0812 12:47:40.892806       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0812 12:47:40.892841       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0812 12:47:40.892982       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0812 12:47:40.893378       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 12:47:40.894137       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 12:47:40.897272       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0812 12:47:40.897583       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0812 12:47:40.897679       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0812 12:47:40.897789       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0812 12:47:40.897862       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0812 12:47:40.897890       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:47:40.903306       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	
	==> kube-controller-manager [1a964c2e9317b3e3ec7d9d16ccfd493cba24799883b817b2eb78d61fb8923554] <==
	I0812 12:49:58.593507       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m02" podCIDRs=["10.244.1.0/24"]
	I0812 12:50:00.468286       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.574µs"
	I0812 12:50:00.483150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.044µs"
	I0812 12:50:00.496143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.417µs"
	I0812 12:50:00.540735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.72µs"
	I0812 12:50:00.548713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.132µs"
	I0812 12:50:00.551132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.652µs"
	I0812 12:50:02.906157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.392µs"
	I0812 12:50:18.269534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:18.294237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.072µs"
	I0812 12:50:18.307713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.967µs"
	I0812 12:50:21.878710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.482595ms"
	I0812 12:50:21.879219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.48µs"
	I0812 12:50:36.588285       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:37.754308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:50:37.757536       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:50:37.766302       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:50:57.405878       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:51:02.919381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:51:42.548607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.922833ms"
	I0812 12:51:42.548715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.239µs"
	I0812 12:51:52.347137       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jpdwd"
	I0812 12:51:52.376339       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jpdwd"
	I0812 12:51:52.376544       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pgzl"
	I0812 12:51:52.400506       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2pgzl"
	
	
	==> kube-controller-manager [e4af25a66f030bcfd49bb89f0616b48c829ea78a22ec92b1e00891f5cf25e3a9] <==
	I0812 12:43:25.181697       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m02" podCIDRs=["10.244.1.0/24"]
	I0812 12:43:25.592088       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-276573-m02"
	I0812 12:43:45.300351       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:43:47.794785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.434645ms"
	I0812 12:43:47.810919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.021909ms"
	I0812 12:43:47.811637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.554µs"
	I0812 12:43:47.813361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.616µs"
	I0812 12:43:47.819499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.195µs"
	I0812 12:43:52.046423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.930069ms"
	I0812 12:43:52.047357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.938µs"
	I0812 12:43:52.515246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.127666ms"
	I0812 12:43:52.515907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.208µs"
	I0812 12:44:24.247313       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:44:24.250124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:44:24.261290       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.2.0/24"]
	I0812 12:44:25.612377       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-276573-m03"
	I0812 12:44:45.241191       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:14.017298       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:15.147392       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:45:15.147590       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-276573-m03\" does not exist"
	I0812 12:45:15.164677       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-276573-m03" podCIDRs=["10.244.3.0/24"]
	I0812 12:45:35.279848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:46:15.668189       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-276573-m02"
	I0812 12:46:20.762902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.711819ms"
	I0812 12:46:20.763031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.94µs"
	
	
	==> kube-proxy [129aad74969bdb07ed1f46eb808b438a5cb27673f663ff46551769f6f8c6ae0c] <==
	I0812 12:42:37.626113       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:42:37.662339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0812 12:42:37.707404       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:42:37.707466       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:42:37.707483       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:42:37.711115       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:42:37.711603       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:42:37.711722       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:42:37.713712       1 config.go:192] "Starting service config controller"
	I0812 12:42:37.713757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:42:37.713784       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:42:37.713805       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:42:37.715055       1 config.go:319] "Starting node config controller"
	I0812 12:42:37.715083       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:42:37.814748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:42:37.814796       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:42:37.815172       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7c4267fc77c678fc47559aa07243fafaa744e248d52d9d40cb0e76cfb4e3c1b1] <==
	I0812 12:49:20.878273       1 server_linux.go:69] "Using iptables proxy"
	I0812 12:49:20.902769       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0812 12:49:20.983619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0812 12:49:20.986374       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 12:49:20.986436       1 server_linux.go:165] "Using iptables Proxier"
	I0812 12:49:20.993764       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0812 12:49:20.994201       1 server.go:872] "Version info" version="v1.30.3"
	I0812 12:49:20.994449       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:49:20.995759       1 config.go:192] "Starting service config controller"
	I0812 12:49:20.995910       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 12:49:20.996063       1 config.go:101] "Starting endpoint slice config controller"
	I0812 12:49:20.996144       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 12:49:20.996705       1 config.go:319] "Starting node config controller"
	I0812 12:49:20.996768       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 12:49:21.096539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 12:49:21.096634       1 shared_informer.go:320] Caches are synced for service config
	I0812 12:49:21.097097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [419ac7b21b8f72871354f34acd3a721867a8c6c2e52616f8b73ee79d24132510] <==
	W0812 12:42:21.115214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 12:42:21.115258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0812 12:42:21.174423       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 12:42:21.174470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0812 12:42:21.214892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 12:42:21.214941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0812 12:42:21.297336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 12:42:21.297487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0812 12:42:21.328857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0812 12:42:21.328902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0812 12:42:21.333826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 12:42:21.333908       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0812 12:42:21.436904       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 12:42:21.437072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0812 12:42:21.526348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 12:42:21.526474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0812 12:42:21.557640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 12:42:21.557744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0812 12:42:21.773292       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 12:42:21.773342       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 12:42:24.321082       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 12:47:40.880535       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0812 12:47:40.880672       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0812 12:47:40.881197       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0812 12:47:40.881801       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a669daee8121335dd4b73f279e3fa653404ac05a54c7e2a60c180661b47b59cc] <==
	I0812 12:49:17.790751       1 serving.go:380] Generated self-signed cert in-memory
	W0812 12:49:19.951780       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 12:49:19.951907       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 12:49:19.951937       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 12:49:19.952019       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 12:49:19.992858       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0812 12:49:19.995120       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 12:49:19.996921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 12:49:19.998422       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 12:49:19.999012       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 12:49:19.998605       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 12:49:20.099131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.139650    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/214cf688-5730-4864-9796-d8f2f321cda3-xtables-lock\") pod \"kindnet-xmzhc\" (UID: \"214cf688-5730-4864-9796-d8f2f321cda3\") " pod="kube-system/kindnet-xmzhc"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.143847    3099 kubelet_node_status.go:112] "Node was previously registered" node="multinode-276573"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.143924    3099 kubelet_node_status.go:76] "Successfully registered node" node="multinode-276573"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.145552    3099 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 12 12:49:20 multinode-276573 kubelet[3099]: I0812 12:49:20.146587    3099 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 12 12:50:16 multinode-276573 kubelet[3099]: E0812 12:50:16.091538    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:50:16 multinode-276573 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:51:16 multinode-276573 kubelet[3099]: E0812 12:51:16.098287    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:51:16 multinode-276573 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:51:16 multinode-276573 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:51:16 multinode-276573 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:51:16 multinode-276573 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:52:16 multinode-276573 kubelet[3099]: E0812 12:52:16.093904    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:52:16 multinode-276573 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:52:16 multinode-276573 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:52:16 multinode-276573 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:52:16 multinode-276573 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 12 12:53:16 multinode-276573 kubelet[3099]: E0812 12:53:16.094002    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 12 12:53:16 multinode-276573 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 12 12:53:16 multinode-276573 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 12 12:53:16 multinode-276573 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 12 12:53:16 multinode-276573 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0812 12:53:23.883202  506038 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19411-463103/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-276573 -n multinode-276573
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-276573 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.43s)
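Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner rejects any single line longer than its default 64 KiB token limit, which is why the harness could not read the oversized lastStart.txt when collecting logs. The sketch below only illustrates that standard-library behaviour and the usual workaround of raising the scanner's buffer cap; the file path and buffer sizes are placeholders, not minikube's actual implementation.

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		// Placeholder path; the report above references .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); any longer line
		// makes Scan fail with "bufio.Scanner: token too long". Supplying a
		// larger maximum via Buffer avoids that for big single-line log files.
		sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}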

                                                
                                    
x
+
TestPreload (270.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-990043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-990043 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m5.311234237s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-990043 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-990043 image pull gcr.io/k8s-minikube/busybox: (3.013050641s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-990043
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-990043: (7.299033409s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-990043 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0812 13:00:44.615721  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-990043 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.065179751s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-990043 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-12 13:01:42.063230717 +0000 UTC m=+5813.004602678
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-990043 -n test-preload-990043
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-990043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-990043 logs -n 25: (1.11530248s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573 sudo cat                                       | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt                       | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m02:/home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n                                                                 | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | multinode-276573-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-276573 ssh -n multinode-276573-m02 sudo cat                                   | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	|         | /home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-276573 node stop m03                                                          | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:44 UTC |
	| node    | multinode-276573 node start                                                             | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:44 UTC | 12 Aug 24 12:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| stop    | -p multinode-276573                                                                     | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:45 UTC |                     |
	| start   | -p multinode-276573                                                                     | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:47 UTC | 12 Aug 24 12:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:50 UTC |                     |
	| node    | multinode-276573 node delete                                                            | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:51 UTC | 12 Aug 24 12:51 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-276573 stop                                                                   | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:51 UTC |                     |
	| start   | -p multinode-276573                                                                     | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:53 UTC | 12 Aug 24 12:56 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-276573                                                                | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:56 UTC |                     |
	| start   | -p multinode-276573-m02                                                                 | multinode-276573-m02 | jenkins | v1.33.1 | 12 Aug 24 12:56 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-276573-m03                                                                 | multinode-276573-m03 | jenkins | v1.33.1 | 12 Aug 24 12:56 UTC | 12 Aug 24 12:57 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-276573                                                                 | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:57 UTC |                     |
	| delete  | -p multinode-276573-m03                                                                 | multinode-276573-m03 | jenkins | v1.33.1 | 12 Aug 24 12:57 UTC | 12 Aug 24 12:57 UTC |
	| delete  | -p multinode-276573                                                                     | multinode-276573     | jenkins | v1.33.1 | 12 Aug 24 12:57 UTC | 12 Aug 24 12:57 UTC |
	| start   | -p test-preload-990043                                                                  | test-preload-990043  | jenkins | v1.33.1 | 12 Aug 24 12:57 UTC | 12 Aug 24 13:00 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-990043 image pull                                                          | test-preload-990043  | jenkins | v1.33.1 | 12 Aug 24 13:00 UTC | 12 Aug 24 13:00 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-990043                                                                  | test-preload-990043  | jenkins | v1.33.1 | 12 Aug 24 13:00 UTC | 12 Aug 24 13:00 UTC |
	| start   | -p test-preload-990043                                                                  | test-preload-990043  | jenkins | v1.33.1 | 12 Aug 24 13:00 UTC | 12 Aug 24 13:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-990043 image list                                                          | test-preload-990043  | jenkins | v1.33.1 | 12 Aug 24 13:01 UTC | 12 Aug 24 13:01 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 13:00:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 13:00:29.815329  508736 out.go:291] Setting OutFile to fd 1 ...
	I0812 13:00:29.815608  508736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:00:29.815618  508736 out.go:304] Setting ErrFile to fd 2...
	I0812 13:00:29.815623  508736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:00:29.815786  508736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 13:00:29.816342  508736 out.go:298] Setting JSON to false
	I0812 13:00:29.817370  508736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16961,"bootTime":1723450669,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 13:00:29.817431  508736 start.go:139] virtualization: kvm guest
	I0812 13:00:29.819537  508736 out.go:177] * [test-preload-990043] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 13:00:29.821408  508736 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 13:00:29.821477  508736 notify.go:220] Checking for updates...
	I0812 13:00:29.823897  508736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 13:00:29.825220  508736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:00:29.826458  508736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 13:00:29.827685  508736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 13:00:29.828885  508736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 13:00:29.830540  508736 config.go:182] Loaded profile config "test-preload-990043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0812 13:00:29.830978  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:00:29.831041  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:00:29.845932  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0812 13:00:29.846349  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:00:29.846872  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:00:29.846905  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:00:29.847256  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:00:29.847423  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:00:29.849047  508736 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0812 13:00:29.850222  508736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 13:00:29.850518  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:00:29.850554  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:00:29.864949  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0812 13:00:29.865405  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:00:29.865874  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:00:29.865903  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:00:29.866262  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:00:29.866438  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:00:29.902665  508736 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 13:00:29.903848  508736 start.go:297] selected driver: kvm2
	I0812 13:00:29.903859  508736 start.go:901] validating driver "kvm2" against &{Name:test-preload-990043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-990043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:00:29.903950  508736 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 13:00:29.904710  508736 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:00:29.904802  508736 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 13:00:29.920078  508736 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 13:00:29.920404  508736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 13:00:29.920489  508736 cni.go:84] Creating CNI manager for ""
	I0812 13:00:29.920503  508736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:00:29.920585  508736 start.go:340] cluster config:
	{Name:test-preload-990043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-990043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:00:29.920695  508736 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:00:29.923163  508736 out.go:177] * Starting "test-preload-990043" primary control-plane node in "test-preload-990043" cluster
	I0812 13:00:29.924336  508736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0812 13:00:30.478150  508736 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0812 13:00:30.478214  508736 cache.go:56] Caching tarball of preloaded images
	I0812 13:00:30.478427  508736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0812 13:00:30.480122  508736 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0812 13:00:30.481262  508736 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0812 13:00:30.595759  508736 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0812 13:00:43.292193  508736 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0812 13:00:43.292307  508736 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0812 13:00:44.164495  508736 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0812 13:00:44.164632  508736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/config.json ...
	I0812 13:00:44.164912  508736 start.go:360] acquireMachinesLock for test-preload-990043: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 13:00:44.164983  508736 start.go:364] duration metric: took 47.762µs to acquireMachinesLock for "test-preload-990043"
	I0812 13:00:44.164997  508736 start.go:96] Skipping create...Using existing machine configuration
	I0812 13:00:44.165004  508736 fix.go:54] fixHost starting: 
	I0812 13:00:44.165378  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:00:44.165419  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:00:44.181248  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
	I0812 13:00:44.181766  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:00:44.182350  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:00:44.182377  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:00:44.182717  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:00:44.182912  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:00:44.183050  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetState
	I0812 13:00:44.184694  508736 fix.go:112] recreateIfNeeded on test-preload-990043: state=Stopped err=<nil>
	I0812 13:00:44.184735  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	W0812 13:00:44.184921  508736 fix.go:138] unexpected machine state, will restart: <nil>
	I0812 13:00:44.187036  508736 out.go:177] * Restarting existing kvm2 VM for "test-preload-990043" ...
	I0812 13:00:44.188335  508736 main.go:141] libmachine: (test-preload-990043) Calling .Start
	I0812 13:00:44.188518  508736 main.go:141] libmachine: (test-preload-990043) Ensuring networks are active...
	I0812 13:00:44.189282  508736 main.go:141] libmachine: (test-preload-990043) Ensuring network default is active
	I0812 13:00:44.189731  508736 main.go:141] libmachine: (test-preload-990043) Ensuring network mk-test-preload-990043 is active
	I0812 13:00:44.190184  508736 main.go:141] libmachine: (test-preload-990043) Getting domain xml...
	I0812 13:00:44.191012  508736 main.go:141] libmachine: (test-preload-990043) Creating domain...
	I0812 13:00:45.419560  508736 main.go:141] libmachine: (test-preload-990043) Waiting to get IP...
	I0812 13:00:45.420466  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:45.420868  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:45.420960  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:45.420840  508819 retry.go:31] will retry after 293.905388ms: waiting for machine to come up
	I0812 13:00:45.716573  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:45.717294  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:45.717322  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:45.717234  508819 retry.go:31] will retry after 332.722092ms: waiting for machine to come up
	I0812 13:00:46.051895  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:46.052378  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:46.052402  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:46.052337  508819 retry.go:31] will retry after 331.396273ms: waiting for machine to come up
	I0812 13:00:46.384796  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:46.385238  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:46.385264  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:46.385184  508819 retry.go:31] will retry after 484.762497ms: waiting for machine to come up
	I0812 13:00:46.871884  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:46.872280  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:46.872304  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:46.872220  508819 retry.go:31] will retry after 556.381365ms: waiting for machine to come up
	I0812 13:00:47.429922  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:47.430437  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:47.430471  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:47.430369  508819 retry.go:31] will retry after 876.998314ms: waiting for machine to come up
	I0812 13:00:48.308613  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:48.309231  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:48.309261  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:48.309161  508819 retry.go:31] will retry after 1.021769814s: waiting for machine to come up
	I0812 13:00:49.332948  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:49.333333  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:49.333357  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:49.333288  508819 retry.go:31] will retry after 1.079372753s: waiting for machine to come up
	I0812 13:00:50.414688  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:50.415109  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:50.415144  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:50.415041  508819 retry.go:31] will retry after 1.685014037s: waiting for machine to come up
	I0812 13:00:52.102065  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:52.102436  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:52.102466  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:52.102383  508819 retry.go:31] will retry after 1.440578293s: waiting for machine to come up
	I0812 13:00:53.544970  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:53.545451  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:53.545473  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:53.545413  508819 retry.go:31] will retry after 2.404395482s: waiting for machine to come up
	I0812 13:00:55.952626  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:55.953029  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:55.953058  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:55.952963  508819 retry.go:31] will retry after 3.519716268s: waiting for machine to come up
	I0812 13:00:59.473955  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:00:59.474442  508736 main.go:141] libmachine: (test-preload-990043) DBG | unable to find current IP address of domain test-preload-990043 in network mk-test-preload-990043
	I0812 13:00:59.474471  508736 main.go:141] libmachine: (test-preload-990043) DBG | I0812 13:00:59.474409  508819 retry.go:31] will retry after 3.158699159s: waiting for machine to come up
	I0812 13:01:02.636949  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.637493  508736 main.go:141] libmachine: (test-preload-990043) Found IP for machine: 192.168.39.105
	I0812 13:01:02.637522  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has current primary IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.637536  508736 main.go:141] libmachine: (test-preload-990043) Reserving static IP address...
	I0812 13:01:02.637971  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "test-preload-990043", mac: "52:54:00:a4:83:cc", ip: "192.168.39.105"} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.637998  508736 main.go:141] libmachine: (test-preload-990043) Reserved static IP address: 192.168.39.105
	I0812 13:01:02.638017  508736 main.go:141] libmachine: (test-preload-990043) DBG | skip adding static IP to network mk-test-preload-990043 - found existing host DHCP lease matching {name: "test-preload-990043", mac: "52:54:00:a4:83:cc", ip: "192.168.39.105"}
	I0812 13:01:02.638035  508736 main.go:141] libmachine: (test-preload-990043) DBG | Getting to WaitForSSH function...
	I0812 13:01:02.638048  508736 main.go:141] libmachine: (test-preload-990043) Waiting for SSH to be available...
	I0812 13:01:02.640169  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.640529  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.640565  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.640707  508736 main.go:141] libmachine: (test-preload-990043) DBG | Using SSH client type: external
	I0812 13:01:02.640733  508736 main.go:141] libmachine: (test-preload-990043) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa (-rw-------)
	I0812 13:01:02.640790  508736 main.go:141] libmachine: (test-preload-990043) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 13:01:02.640807  508736 main.go:141] libmachine: (test-preload-990043) DBG | About to run SSH command:
	I0812 13:01:02.640839  508736 main.go:141] libmachine: (test-preload-990043) DBG | exit 0
	I0812 13:01:02.765444  508736 main.go:141] libmachine: (test-preload-990043) DBG | SSH cmd err, output: <nil>: 
	I0812 13:01:02.765873  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetConfigRaw
	I0812 13:01:02.766583  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetIP
	I0812 13:01:02.769012  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.769364  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.769392  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.769640  508736 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/config.json ...
	I0812 13:01:02.769891  508736 machine.go:94] provisionDockerMachine start ...
	I0812 13:01:02.769915  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:02.770144  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:02.772576  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.773044  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.773072  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.773213  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:02.773378  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:02.773517  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:02.773618  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:02.773774  508736 main.go:141] libmachine: Using SSH client type: native
	I0812 13:01:02.774024  508736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0812 13:01:02.774039  508736 main.go:141] libmachine: About to run SSH command:
	hostname
	I0812 13:01:02.873874  508736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0812 13:01:02.873918  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetMachineName
	I0812 13:01:02.874213  508736 buildroot.go:166] provisioning hostname "test-preload-990043"
	I0812 13:01:02.874240  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetMachineName
	I0812 13:01:02.874492  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:02.877316  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.877702  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.877739  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.877896  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:02.878060  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:02.878193  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:02.878315  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:02.878482  508736 main.go:141] libmachine: Using SSH client type: native
	I0812 13:01:02.878687  508736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0812 13:01:02.878700  508736 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-990043 && echo "test-preload-990043" | sudo tee /etc/hostname
	I0812 13:01:02.996252  508736 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-990043
	
	I0812 13:01:02.996288  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:02.999237  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.999614  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:02.999648  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:02.999838  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:03.000060  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.000218  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.000413  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:03.000614  508736 main.go:141] libmachine: Using SSH client type: native
	I0812 13:01:03.000790  508736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0812 13:01:03.000805  508736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-990043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-990043/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-990043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 13:01:03.110441  508736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 13:01:03.110472  508736 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 13:01:03.110498  508736 buildroot.go:174] setting up certificates
	I0812 13:01:03.110519  508736 provision.go:84] configureAuth start
	I0812 13:01:03.110528  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetMachineName
	I0812 13:01:03.110880  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetIP
	I0812 13:01:03.113641  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.114027  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:03.114060  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.114181  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:03.116401  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.116769  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:03.116796  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.116980  508736 provision.go:143] copyHostCerts
	I0812 13:01:03.117067  508736 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 13:01:03.117097  508736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 13:01:03.117173  508736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 13:01:03.117276  508736 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 13:01:03.117285  508736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 13:01:03.117311  508736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 13:01:03.117380  508736 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 13:01:03.117388  508736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 13:01:03.117409  508736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 13:01:03.117466  508736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.test-preload-990043 san=[127.0.0.1 192.168.39.105 localhost minikube test-preload-990043]
	I0812 13:01:03.554534  508736 provision.go:177] copyRemoteCerts
	I0812 13:01:03.554602  508736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 13:01:03.554656  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:03.557540  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.557962  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:03.557995  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.558114  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:03.558323  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.558507  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:03.558701  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:03.639862  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 13:01:03.666063  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 13:01:03.690787  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0812 13:01:03.716166  508736 provision.go:87] duration metric: took 605.629644ms to configureAuth
	I0812 13:01:03.716211  508736 buildroot.go:189] setting minikube options for container-runtime
	I0812 13:01:03.716417  508736 config.go:182] Loaded profile config "test-preload-990043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0812 13:01:03.716522  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:03.719331  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.719762  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:03.719796  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.719952  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:03.720178  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.720316  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.720499  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:03.720689  508736 main.go:141] libmachine: Using SSH client type: native
	I0812 13:01:03.720877  508736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0812 13:01:03.720898  508736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 13:01:03.982337  508736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 13:01:03.982367  508736 machine.go:97] duration metric: took 1.212459518s to provisionDockerMachine
	I0812 13:01:03.982379  508736 start.go:293] postStartSetup for "test-preload-990043" (driver="kvm2")
	I0812 13:01:03.982390  508736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 13:01:03.982407  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:03.982836  508736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 13:01:03.982878  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:03.985777  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.986144  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:03.986183  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:03.986280  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:03.986556  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:03.986728  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:03.986980  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:04.068177  508736 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 13:01:04.072459  508736 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 13:01:04.072492  508736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 13:01:04.072582  508736 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 13:01:04.072680  508736 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 13:01:04.072781  508736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 13:01:04.082508  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 13:01:04.109046  508736 start.go:296] duration metric: took 126.652668ms for postStartSetup
	I0812 13:01:04.109118  508736 fix.go:56] duration metric: took 19.94411253s for fixHost
	I0812 13:01:04.109144  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:04.111820  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.112195  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:04.112228  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.112441  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:04.112694  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:04.112843  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:04.113015  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:04.113153  508736 main.go:141] libmachine: Using SSH client type: native
	I0812 13:01:04.113356  508736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0812 13:01:04.113368  508736 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0812 13:01:04.218203  508736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723467664.191300621
	
	I0812 13:01:04.218232  508736 fix.go:216] guest clock: 1723467664.191300621
	I0812 13:01:04.218240  508736 fix.go:229] Guest: 2024-08-12 13:01:04.191300621 +0000 UTC Remote: 2024-08-12 13:01:04.109124412 +0000 UTC m=+34.331322151 (delta=82.176209ms)
	I0812 13:01:04.218261  508736 fix.go:200] guest clock delta is within tolerance: 82.176209ms
	I0812 13:01:04.218266  508736 start.go:83] releasing machines lock for "test-preload-990043", held for 20.053275757s
	I0812 13:01:04.218284  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:04.218605  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetIP
	I0812 13:01:04.221631  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.222037  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:04.222073  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.222201  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:04.222842  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:04.223097  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:04.223233  508736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 13:01:04.223278  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:04.223373  508736 ssh_runner.go:195] Run: cat /version.json
	I0812 13:01:04.223405  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:04.226427  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.226788  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.226878  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:04.226909  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.227048  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:04.227193  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:04.227216  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:04.227229  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:04.227424  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:04.227441  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:04.227598  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:04.227631  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:04.227741  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:04.227861  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:04.330651  508736 ssh_runner.go:195] Run: systemctl --version
	I0812 13:01:04.336848  508736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 13:01:04.480330  508736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 13:01:04.486539  508736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 13:01:04.486623  508736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 13:01:04.503358  508736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 13:01:04.503388  508736 start.go:495] detecting cgroup driver to use...
	I0812 13:01:04.503481  508736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 13:01:04.520747  508736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 13:01:04.536996  508736 docker.go:217] disabling cri-docker service (if available) ...
	I0812 13:01:04.537062  508736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 13:01:04.552291  508736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 13:01:04.567467  508736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 13:01:04.681332  508736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 13:01:04.844867  508736 docker.go:233] disabling docker service ...
	I0812 13:01:04.844968  508736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 13:01:04.862665  508736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 13:01:04.876009  508736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 13:01:05.008084  508736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 13:01:05.141147  508736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
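For readability, the runtime cleanup logged above amounts to stopping, disabling and masking the Docker-side units so CRI-O is the only runtime left for the kubelet. A minimal shell sketch of that sequence (unit names as they appear in this log; the error suppression is an addition so missing units do not abort the script, and this is not the exact minikube code):

    #!/bin/sh
    # Stop, disable and mask cri-dockerd and Docker so CRI-O is the only runtime left for the kubelet.
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
        sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service
    if sudo systemctl is-active --quiet docker; then
        echo "warning: docker is still active" >&2
    fi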
	I0812 13:01:05.162951  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 13:01:05.182386  508736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0812 13:01:05.182451  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.193174  508736 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 13:01:05.193245  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.204488  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.215679  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.226793  508736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 13:01:05.237931  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.248783  508736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.267213  508736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:01:05.277722  508736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 13:01:05.287186  508736 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 13:01:05.287286  508736 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 13:01:05.300553  508736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 13:01:05.311208  508736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:01:05.440406  508736 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 13:01:05.579688  508736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 13:01:05.579793  508736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 13:01:05.584962  508736 start.go:563] Will wait 60s for crictl version
	I0812 13:01:05.585029  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:05.589442  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 13:01:05.633017  508736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 13:01:05.633139  508736 ssh_runner.go:195] Run: crio --version
	I0812 13:01:05.663877  508736 ssh_runner.go:195] Run: crio --version
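The CRI-O reconfiguration logged above (pause image, cgroup driver, restart, version check) boils down to a couple of drop-in edits followed by a service restart. A rough shell sketch of that sequence, using the same file path and values seen in this run (a simplification, not the exact minikube logic):

    #!/bin/sh
    # Sketch of the key CRI-O drop-in edits from the log, followed by a restart and a sanity check.
    set -e
    conf=/etc/crio/crio.conf.d/02-crio.conf
    # Point CRI-O at the pause image kubeadm expects for this Kubernetes version.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$conf"
    # Match the kubelet's cgroup driver (cgroupfs on this guest image).
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo systemctl daemon-reload
    sudo systemctl restart crio
    sudo crictl version   # confirms the runtime answers on /var/run/crio/crio.sock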
	I0812 13:01:05.694814  508736 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0812 13:01:05.696264  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetIP
	I0812 13:01:05.699666  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:05.700159  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:05.700194  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:05.700513  508736 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0812 13:01:05.705039  508736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
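The host-entry update above follows an idempotent pattern: filter out any stale line for the name, append the desired mapping, and copy the result back over /etc/hosts. A standalone sketch of that pattern (IP and hostname are the values from this run):

    #!/bin/sh
    # Idempotently pin a hostname to an IP in /etc/hosts.
    ip="192.168.39.1"
    name="host.minikube.internal"
    # Drop any existing line for the name, append the fresh mapping, then replace the file.
    { grep -v "$(printf '\t')${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
    rm -f /tmp/hosts.$$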
	I0812 13:01:05.719954  508736 kubeadm.go:883] updating cluster {Name:test-preload-990043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-990043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 13:01:05.720083  508736 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0812 13:01:05.720126  508736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 13:01:05.763803  508736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0812 13:01:05.763883  508736 ssh_runner.go:195] Run: which lz4
	I0812 13:01:05.768370  508736 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 13:01:05.772989  508736 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 13:01:05.773031  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0812 13:01:07.381762  508736 crio.go:462] duration metric: took 1.613438157s to copy over tarball
	I0812 13:01:07.381839  508736 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 13:01:09.759162  508736 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.377287582s)
	I0812 13:01:09.759198  508736 crio.go:469] duration metric: took 2.377402815s to extract the tarball
	I0812 13:01:09.759209  508736 ssh_runner.go:146] rm: /preloaded.tar.lz4
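The preload handling above is: check whether the tarball already exists on the guest, copy it over if not, unpack it into /var with lz4, then delete it. A guest-side sketch of the unpack step under those assumptions (paths from this run; the scp from the host cache is omitted):

    #!/bin/sh
    # Rough guest-side equivalent of the preload extraction above.
    set -e
    tarball=/preloaded.tar.lz4
    if [ ! -f "$tarball" ]; then
        echo "tarball missing; in the real flow it is copied from the host cache first" >&2
        exit 1
    fi
    # Extract the cached images into /var, preserving security xattrs, then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$tarball"
    sudo rm -f "$tarball"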
	I0812 13:01:09.800687  508736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 13:01:09.845683  508736 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0812 13:01:09.845709  508736 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 13:01:09.845761  508736 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:01:09.845848  508736 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:09.845875  508736 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:09.845883  508736 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:09.845907  508736 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0812 13:01:09.845858  508736 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:09.846025  508736 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:09.846262  508736 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:09.847334  508736 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:09.847311  508736 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:09.847320  508736 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:09.847383  508736 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:09.847311  508736 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:09.847311  508736 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0812 13:01:09.847625  508736 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:09.847753  508736 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:01:10.028955  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:10.064356  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0812 13:01:10.069823  508736 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0812 13:01:10.069871  508736 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:10.069921  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.106464  508736 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0812 13:01:10.106517  508736 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0812 13:01:10.106559  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.106568  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:10.143939  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:10.144027  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0812 13:01:10.185064  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:10.191904  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:10.192425  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:10.195388  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:10.198524  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0812 13:01:10.198573  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0812 13:01:10.201387  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:10.327563  508736 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0812 13:01:10.327618  508736 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:10.327689  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.352420  508736 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0812 13:01:10.352474  508736 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:10.352501  508736 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0812 13:01:10.352529  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.352535  508736 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:10.352590  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.365810  508736 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0812 13:01:10.365832  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0812 13:01:10.365860  508736 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:10.365902  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.365905  508736 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0812 13:01:10.365817  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0812 13:01:10.365921  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0812 13:01:10.365935  508736 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:10.365975  508736 ssh_runner.go:195] Run: which crictl
	I0812 13:01:10.366005  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:10.366070  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:10.366090  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:10.457553  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:10.457587  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:10.457620  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0812 13:01:10.457730  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0812 13:01:10.457734  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:10.457815  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:10.457788  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:10.457829  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0812 13:01:10.457845  508736 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0812 13:01:10.457869  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0812 13:01:10.587293  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:10.587348  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0812 13:01:10.776046  508736 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:01:13.219602  508736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.761707399s)
	I0812 13:01:13.219649  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0812 13:01:13.219691  508736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.761851861s)
	I0812 13:01:13.219784  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:13.219866  508736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.762028449s)
	I0812 13:01:13.219949  508736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.762136943s)
	I0812 13:01:13.219989  508736 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.762232593s)
	I0812 13:01:13.220023  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0812 13:01:13.220032  508736 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0812 13:01:13.220063  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0812 13:01:13.220078  508736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.632747804s)
	I0812 13:01:13.219994  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0812 13:01:13.219955  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0812 13:01:13.220190  508736 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.632825639s)
	I0812 13:01:13.220146  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0812 13:01:13.220229  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0812 13:01:13.220305  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0812 13:01:13.220318  508736 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.444242266s)
	I0812 13:01:13.326445  508736 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0812 13:01:13.326515  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0812 13:01:13.326455  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0812 13:01:13.326575  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0812 13:01:13.326612  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0812 13:01:13.326638  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0812 13:01:13.452466  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0812 13:01:13.452472  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0812 13:01:13.452541  508736 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0812 13:01:13.452597  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0812 13:01:13.452613  508736 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0812 13:01:13.452661  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0812 13:01:13.452597  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0812 13:01:13.452688  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0812 13:01:13.452709  508736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0812 13:01:14.217919  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0812 13:01:14.217976  508736 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0812 13:01:14.217981  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0812 13:01:14.218018  508736 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0812 13:01:14.218071  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0812 13:01:16.471535  508736 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.253434418s)
	I0812 13:01:16.471589  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0812 13:01:16.471619  508736 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0812 13:01:16.471673  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0812 13:01:16.918341  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0812 13:01:16.918397  508736 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0812 13:01:16.918446  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0812 13:01:17.369515  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0812 13:01:17.369579  508736 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0812 13:01:17.369638  508736 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0812 13:01:18.113286  508736 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0812 13:01:18.113346  508736 cache_images.go:123] Successfully loaded all cached images
	I0812 13:01:18.113353  508736 cache_images.go:92] duration metric: took 8.267633898s to LoadCachedImages
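The cache-load phase above repeats the same per-image pattern: check whether the runtime already has the image, and if not, remove any stale tag and load the cached tarball with podman. A sketch for a single image under those assumptions (image and tarball names are the ones from this log):

    #!/bin/sh
    # Sketch of the per-image cache load performed above (one image shown).
    set -e
    img=registry.k8s.io/kube-proxy:v1.24.4
    tar=/var/lib/minikube/images/kube-proxy_v1.24.4
    # Only load when the runtime does not already have the image.
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale tag first
        sudo podman load -i "$tar"                            # tarball was copied from the host cache
    fi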
	I0812 13:01:18.113372  508736 kubeadm.go:934] updating node { 192.168.39.105 8443 v1.24.4 crio true true} ...
	I0812 13:01:18.113529  508736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-990043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-990043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 13:01:18.113607  508736 ssh_runner.go:195] Run: crio config
	I0812 13:01:18.159602  508736 cni.go:84] Creating CNI manager for ""
	I0812 13:01:18.159633  508736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:01:18.159648  508736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 13:01:18.159668  508736 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-990043 NodeName:test-preload-990043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0812 13:01:18.159841  508736 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-990043"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 13:01:18.159924  508736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0812 13:01:18.169778  508736 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 13:01:18.169859  508736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 13:01:18.179218  508736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0812 13:01:18.196433  508736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 13:01:18.214051  508736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0812 13:01:18.232439  508736 ssh_runner.go:195] Run: grep 192.168.39.105	control-plane.minikube.internal$ /etc/hosts
	I0812 13:01:18.236515  508736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 13:01:18.248533  508736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:01:18.362770  508736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 13:01:18.379985  508736 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043 for IP: 192.168.39.105
	I0812 13:01:18.380018  508736 certs.go:194] generating shared ca certs ...
	I0812 13:01:18.380048  508736 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:01:18.380247  508736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 13:01:18.380311  508736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 13:01:18.380326  508736 certs.go:256] generating profile certs ...
	I0812 13:01:18.380443  508736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/client.key
	I0812 13:01:18.380534  508736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/apiserver.key.63ac49d2
	I0812 13:01:18.380590  508736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/proxy-client.key
	I0812 13:01:18.380755  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 13:01:18.380798  508736 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 13:01:18.380811  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 13:01:18.380843  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 13:01:18.380867  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 13:01:18.380899  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 13:01:18.380939  508736 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 13:01:18.381673  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 13:01:18.427671  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 13:01:18.467578  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 13:01:18.494155  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 13:01:18.523222  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0812 13:01:18.554408  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 13:01:18.584173  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 13:01:18.627190  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 13:01:18.653069  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 13:01:18.677072  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 13:01:18.701240  508736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 13:01:18.725962  508736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 13:01:18.743348  508736 ssh_runner.go:195] Run: openssl version
	I0812 13:01:18.749376  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 13:01:18.760593  508736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 13:01:18.765405  508736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 13:01:18.765469  508736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 13:01:18.771208  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 13:01:18.781927  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 13:01:18.792925  508736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:01:18.797395  508736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:01:18.797458  508736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:01:18.803227  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 13:01:18.814538  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 13:01:18.826299  508736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 13:01:18.831403  508736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 13:01:18.831474  508736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 13:01:18.837753  508736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
	I0812 13:01:18.849684  508736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 13:01:18.854659  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0812 13:01:18.860935  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0812 13:01:18.867053  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0812 13:01:18.873340  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0812 13:01:18.879598  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0812 13:01:18.885770  508736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
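The series of openssl calls above checks that each existing control-plane certificate is still valid for at least 24 hours (-checkend 86400). A compact sketch of the same check as a loop, using the certificate paths from this run:

    #!/bin/sh
    # Fail if any control-plane certificate expires within the next 24 hours.
    status=0
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
        # -checkend returns non-zero if the cert expires within the given number of seconds.
        openssl x509 -noout -in "$crt" -checkend 86400 || { echo "expiring soon: $crt"; status=1; }
    done
    exit $status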
	I0812 13:01:18.891746  508736 kubeadm.go:392] StartCluster: {Name:test-preload-990043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-990043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:01:18.891839  508736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 13:01:18.891888  508736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 13:01:18.930433  508736 cri.go:89] found id: ""
	I0812 13:01:18.930512  508736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 13:01:18.941279  508736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0812 13:01:18.941306  508736 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0812 13:01:18.941384  508736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0812 13:01:18.952549  508736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0812 13:01:18.953033  508736 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-990043" does not appear in /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:01:18.953179  508736 kubeconfig.go:62] /home/jenkins/minikube-integration/19411-463103/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-990043" cluster setting kubeconfig missing "test-preload-990043" context setting]
	I0812 13:01:18.953505  508736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:01:18.954211  508736 kapi.go:59] client config for test-preload-990043: &rest.Config{Host:"https://192.168.39.105:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 13:01:18.954888  508736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0812 13:01:18.964997  508736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.105
	I0812 13:01:18.965035  508736 kubeadm.go:1160] stopping kube-system containers ...
	I0812 13:01:18.965052  508736 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0812 13:01:18.965152  508736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 13:01:19.003607  508736 cri.go:89] found id: ""
	I0812 13:01:19.003697  508736 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0812 13:01:19.020618  508736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 13:01:19.031057  508736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 13:01:19.031092  508736 kubeadm.go:157] found existing configuration files:
	
	I0812 13:01:19.031162  508736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 13:01:19.040904  508736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 13:01:19.040976  508736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 13:01:19.051120  508736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 13:01:19.060842  508736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 13:01:19.060905  508736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 13:01:19.071090  508736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 13:01:19.080268  508736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 13:01:19.080324  508736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 13:01:19.089732  508736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 13:01:19.098967  508736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 13:01:19.099031  508736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 13:01:19.110131  508736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 13:01:19.120134  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:01:19.218386  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:01:20.061581  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:01:20.318011  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:01:20.399349  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
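The restart path above does not run a full "kubeadm init"; it reruns individual init phases against the generated config. A sketch of that sequence with the paths from this run (the run() wrapper is only a convenience for this sketch):

    #!/bin/sh
    # Rerun the selected kubeadm init phases against the generated kubeadm.yaml.
    set -e
    cfg=/var/tmp/minikube/kubeadm.yaml
    run() { sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" "$@"; }
    run kubeadm init phase certs all         --config "$cfg"
    run kubeadm init phase kubeconfig all    --config "$cfg"
    run kubeadm init phase kubelet-start     --config "$cfg"
    run kubeadm init phase control-plane all --config "$cfg"
    run kubeadm init phase etcd local        --config "$cfg"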
	I0812 13:01:20.477810  508736 api_server.go:52] waiting for apiserver process to appear ...
	I0812 13:01:20.477916  508736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:01:20.978752  508736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:01:21.478077  508736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:01:21.517389  508736 api_server.go:72] duration metric: took 1.039576184s to wait for apiserver process to appear ...
	I0812 13:01:21.517423  508736 api_server.go:88] waiting for apiserver healthz status ...
	I0812 13:01:21.517463  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:21.517972  508736 api_server.go:269] stopped: https://192.168.39.105:8443/healthz: Get "https://192.168.39.105:8443/healthz": dial tcp 192.168.39.105:8443: connect: connection refused
	I0812 13:01:22.018459  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:22.019139  508736 api_server.go:269] stopped: https://192.168.39.105:8443/healthz: Get "https://192.168.39.105:8443/healthz": dial tcp 192.168.39.105:8443: connect: connection refused
	I0812 13:01:22.517734  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:25.341113  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 13:01:25.341152  508736 api_server.go:103] status: https://192.168.39.105:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 13:01:25.341168  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:25.420280  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:01:25.420312  508736 api_server.go:103] status: https://192.168.39.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:01:25.518513  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:25.523603  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:01:25.523640  508736 api_server.go:103] status: https://192.168.39.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:01:26.018431  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:26.024297  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:01:26.024343  508736 api_server.go:103] status: https://192.168.39.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:01:26.517864  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:26.530880  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:01:26.530942  508736 api_server.go:103] status: https://192.168.39.105:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:01:27.018556  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:27.026275  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0812 13:01:27.035704  508736 api_server.go:141] control plane version: v1.24.4
	I0812 13:01:27.035743  508736 api_server.go:131] duration metric: took 5.518313361s to wait for apiserver health ...
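The block above shows the apiserver healthz wait: the endpoint first refuses connections, then returns 403 (anonymous access to /healthz), then 500 while the bootstrap post-start hooks finish, and finally 200. A minimal sketch of that kind of polling loop, assuming a self-signed bootstrap certificate and the ~500ms cadence visible in the log (illustrative only, not minikube's api_server.go implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert during bootstrap, so this sketch
	// skips verification; real code should pin the cluster CA instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the interval seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.105:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```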
	I0812 13:01:27.035754  508736 cni.go:84] Creating CNI manager for ""
	I0812 13:01:27.035760  508736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:01:27.037586  508736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 13:01:27.039089  508736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 13:01:27.051379  508736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 13:01:27.073311  508736 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 13:01:27.086091  508736 system_pods.go:59] 7 kube-system pods found
	I0812 13:01:27.086124  508736 system_pods.go:61] "coredns-6d4b75cb6d-vc452" [58e7b67d-8651-4cb0-b156-41e57c9be638] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0812 13:01:27.086129  508736 system_pods.go:61] "etcd-test-preload-990043" [d57d28bf-3f68-4859-bf1f-cb3d152551c6] Running
	I0812 13:01:27.086136  508736 system_pods.go:61] "kube-apiserver-test-preload-990043" [2077135e-c5c3-42ea-ba5c-403b25547698] Running
	I0812 13:01:27.086143  508736 system_pods.go:61] "kube-controller-manager-test-preload-990043" [0661d877-e302-4712-851c-1f2b7a5550f2] Running
	I0812 13:01:27.086155  508736 system_pods.go:61] "kube-proxy-z49pr" [983c13ee-40b6-4aab-9703-8f1f1c4dca06] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0812 13:01:27.086166  508736 system_pods.go:61] "kube-scheduler-test-preload-990043" [737adaa8-dc6e-4501-ab13-fbe3b2d745a1] Running
	I0812 13:01:27.086170  508736 system_pods.go:61] "storage-provisioner" [42d73f1e-8d3c-482c-a057-8f62dfbe94b3] Running
	I0812 13:01:27.086177  508736 system_pods.go:74] duration metric: took 12.838497ms to wait for pod list to return data ...
	I0812 13:01:27.086187  508736 node_conditions.go:102] verifying NodePressure condition ...
	I0812 13:01:27.090360  508736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 13:01:27.090388  508736 node_conditions.go:123] node cpu capacity is 2
	I0812 13:01:27.090400  508736 node_conditions.go:105] duration metric: took 4.208676ms to run NodePressure ...
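The NodePressure verification above checks that the node reports no resource-pressure conditions (and logs its storage and CPU capacity). A hypothetical helper expressing that condition check with the Kubernetes API types, under the assumption that only memory and disk pressure are inspected:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasNodePressure reports whether MemoryPressure or DiskPressure is True,
// which is what the NodePressure verification above guards against.
func hasNodePressure(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
			cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	node := &corev1.Node{}
	node.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
		{Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse},
	}
	fmt.Println("pressure:", hasNodePressure(node)) // pressure: false
}
```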
	I0812 13:01:27.090423  508736 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:01:27.290298  508736 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0812 13:01:27.296115  508736 retry.go:31] will retry after 128.707357ms: kubelet not initialised
	I0812 13:01:27.431743  508736 retry.go:31] will retry after 384.200004ms: kubelet not initialised
	I0812 13:01:27.822747  508736 retry.go:31] will retry after 798.023948ms: kubelet not initialised
	I0812 13:01:28.627279  508736 retry.go:31] will retry after 1.099340379s: kubelet not initialised
	I0812 13:01:29.733692  508736 retry.go:31] will retry after 1.66953162s: kubelet not initialised
	I0812 13:01:31.410659  508736 kubeadm.go:739] kubelet initialised
	I0812 13:01:31.410690  508736 kubeadm.go:740] duration metric: took 4.120360534s waiting for restarted kubelet to initialise ...
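The retry.go lines above wait for the restarted kubelet with steadily growing delays (≈128ms, 384ms, 798ms, 1.1s, 1.7s). A sketch of that retry-with-backoff pattern, assuming a simple multiplicative backoff rather than minikube's actual retry package:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, growing the
// delay between attempts roughly like the cadence in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 3 // assumption: minikube's real backoff is jittered, not a fixed factor
	}
	return errors.New("kubelet not initialised after all retries")
}

func main() {
	_ = retryWithBackoff(5, 128*time.Millisecond, func() error {
		// In the real flow this would list kube-system pods and check whether
		// the restarted kubelet has begun reconciling them.
		return errors.New("kubelet not initialised")
	})
}
```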
	I0812 13:01:31.410706  508736 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 13:01:31.415899  508736 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:31.421034  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.421061  508736 pod_ready.go:81] duration metric: took 5.135758ms for pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:31.421071  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.421089  508736 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:31.426004  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "etcd-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.426035  508736 pod_ready.go:81] duration metric: took 4.937562ms for pod "etcd-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:31.426047  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "etcd-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.426056  508736 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:31.430355  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "kube-apiserver-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.430379  508736 pod_ready.go:81] duration metric: took 4.310913ms for pod "kube-apiserver-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:31.430387  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "kube-apiserver-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.430393  508736 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:31.434628  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.434650  508736 pod_ready.go:81] duration metric: took 4.247701ms for pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:31.434658  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.434664  508736 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z49pr" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:31.809248  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "kube-proxy-z49pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.809279  508736 pod_ready.go:81] duration metric: took 374.606236ms for pod "kube-proxy-z49pr" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:31.809289  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "kube-proxy-z49pr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:31.809295  508736 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:32.208506  508736 pod_ready.go:97] node "test-preload-990043" hosting pod "kube-scheduler-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:32.208555  508736 pod_ready.go:81] duration metric: took 399.251653ms for pod "kube-scheduler-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	E0812 13:01:32.208568  508736 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-990043" hosting pod "kube-scheduler-test-preload-990043" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:32.208577  508736 pod_ready.go:38] duration metric: took 797.85889ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
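pod_ready.go above checks each system-critical pod's Ready condition and skips pods whose node is not yet Ready. A small hypothetical helper showing how such a condition check looks with the Kubernetes API types (not minikube's actual code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// mirroring the per-pod checks logged by pod_ready.go above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}
```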
	I0812 13:01:32.208610  508736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 13:01:32.222453  508736 ops.go:34] apiserver oom_adj: -16
	I0812 13:01:32.222482  508736 kubeadm.go:597] duration metric: took 13.281168372s to restartPrimaryControlPlane
	I0812 13:01:32.222494  508736 kubeadm.go:394] duration metric: took 13.330755044s to StartCluster
	I0812 13:01:32.222541  508736 settings.go:142] acquiring lock: {Name:mke9ed38a916e17fe99baccde568c442d70df1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:01:32.222641  508736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:01:32.223473  508736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:01:32.223797  508736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 13:01:32.223916  508736 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 13:01:32.224016  508736 addons.go:69] Setting storage-provisioner=true in profile "test-preload-990043"
	I0812 13:01:32.224020  508736 config.go:182] Loaded profile config "test-preload-990043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0812 13:01:32.224045  508736 addons.go:234] Setting addon storage-provisioner=true in "test-preload-990043"
	W0812 13:01:32.224059  508736 addons.go:243] addon storage-provisioner should already be in state true
	I0812 13:01:32.224056  508736 addons.go:69] Setting default-storageclass=true in profile "test-preload-990043"
	I0812 13:01:32.224088  508736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-990043"
	I0812 13:01:32.224090  508736 host.go:66] Checking if "test-preload-990043" exists ...
	I0812 13:01:32.224441  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:01:32.224482  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:01:32.224581  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:01:32.224628  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:01:32.225581  508736 out.go:177] * Verifying Kubernetes components...
	I0812 13:01:32.226871  508736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:01:32.240074  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0812 13:01:32.240199  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0812 13:01:32.240560  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:01:32.240579  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:01:32.241076  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:01:32.241111  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:01:32.241114  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:01:32.241138  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:01:32.241476  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:01:32.241523  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:01:32.241699  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetState
	I0812 13:01:32.242011  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:01:32.242046  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:01:32.243921  508736 kapi.go:59] client config for test-preload-990043: &rest.Config{Host:"https://192.168.39.105:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/test-preload-990043/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 13:01:32.244186  508736 addons.go:234] Setting addon default-storageclass=true in "test-preload-990043"
	W0812 13:01:32.244200  508736 addons.go:243] addon default-storageclass should already be in state true
	I0812 13:01:32.244222  508736 host.go:66] Checking if "test-preload-990043" exists ...
	I0812 13:01:32.244442  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:01:32.244481  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:01:32.258019  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0812 13:01:32.258588  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:01:32.259101  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:01:32.259126  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:01:32.259444  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:01:32.259599  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0812 13:01:32.259655  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetState
	I0812 13:01:32.259969  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:01:32.260424  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:01:32.260456  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:01:32.260968  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:01:32.261585  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:32.261595  508736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:01:32.261632  508736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:01:32.263551  508736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:01:32.264777  508736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 13:01:32.264794  508736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 13:01:32.264809  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:32.267764  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:32.268216  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:32.268239  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:32.268472  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:32.268671  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:32.268833  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:32.268987  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:32.276853  508736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0812 13:01:32.277259  508736 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:01:32.277732  508736 main.go:141] libmachine: Using API Version  1
	I0812 13:01:32.277759  508736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:01:32.278054  508736 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:01:32.278247  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetState
	I0812 13:01:32.279764  508736 main.go:141] libmachine: (test-preload-990043) Calling .DriverName
	I0812 13:01:32.279963  508736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 13:01:32.279978  508736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 13:01:32.279992  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHHostname
	I0812 13:01:32.282291  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:32.282635  508736 main.go:141] libmachine: (test-preload-990043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:83:cc", ip: ""} in network mk-test-preload-990043: {Iface:virbr1 ExpiryTime:2024-08-12 14:00:55 +0000 UTC Type:0 Mac:52:54:00:a4:83:cc Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:test-preload-990043 Clientid:01:52:54:00:a4:83:cc}
	I0812 13:01:32.282663  508736 main.go:141] libmachine: (test-preload-990043) DBG | domain test-preload-990043 has defined IP address 192.168.39.105 and MAC address 52:54:00:a4:83:cc in network mk-test-preload-990043
	I0812 13:01:32.282796  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHPort
	I0812 13:01:32.282995  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHKeyPath
	I0812 13:01:32.283189  508736 main.go:141] libmachine: (test-preload-990043) Calling .GetSSHUsername
	I0812 13:01:32.283347  508736 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/test-preload-990043/id_rsa Username:docker}
	I0812 13:01:32.402660  508736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 13:01:32.421018  508736 node_ready.go:35] waiting up to 6m0s for node "test-preload-990043" to be "Ready" ...
	I0812 13:01:32.562164  508736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 13:01:32.567776  508736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 13:01:33.610964  508736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048747103s)
	I0812 13:01:33.611036  508736 main.go:141] libmachine: Making call to close driver server
	I0812 13:01:33.611051  508736 main.go:141] libmachine: (test-preload-990043) Calling .Close
	I0812 13:01:33.611047  508736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043234947s)
	I0812 13:01:33.611093  508736 main.go:141] libmachine: Making call to close driver server
	I0812 13:01:33.611110  508736 main.go:141] libmachine: (test-preload-990043) Calling .Close
	I0812 13:01:33.611406  508736 main.go:141] libmachine: (test-preload-990043) DBG | Closing plugin on server side
	I0812 13:01:33.611430  508736 main.go:141] libmachine: (test-preload-990043) DBG | Closing plugin on server side
	I0812 13:01:33.611459  508736 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:01:33.611461  508736 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:01:33.611466  508736 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:01:33.611475  508736 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:01:33.611484  508736 main.go:141] libmachine: Making call to close driver server
	I0812 13:01:33.611492  508736 main.go:141] libmachine: (test-preload-990043) Calling .Close
	I0812 13:01:33.611495  508736 main.go:141] libmachine: Making call to close driver server
	I0812 13:01:33.611503  508736 main.go:141] libmachine: (test-preload-990043) Calling .Close
	I0812 13:01:33.611749  508736 main.go:141] libmachine: (test-preload-990043) DBG | Closing plugin on server side
	I0812 13:01:33.611754  508736 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:01:33.611765  508736 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:01:33.611777  508736 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:01:33.611790  508736 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:01:33.628259  508736 main.go:141] libmachine: Making call to close driver server
	I0812 13:01:33.628289  508736 main.go:141] libmachine: (test-preload-990043) Calling .Close
	I0812 13:01:33.628657  508736 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:01:33.628681  508736 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:01:33.628699  508736 main.go:141] libmachine: (test-preload-990043) DBG | Closing plugin on server side
	I0812 13:01:33.630878  508736 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 13:01:33.632297  508736 addons.go:510] duration metric: took 1.408389981s for enable addons: enabled=[storage-provisioner default-storageclass]
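The addon step above copies each manifest to the node and applies it with the bundled kubectl binary and the node-local kubeconfig. As an illustrative sketch of that apply step driven from Go (paths come from the log; the wrapper itself is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest shells out to the cluster's kubectl with the node-local
// kubeconfig, mirroring the `kubectl apply -f` commands in the log above.
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	// Paths as seen in the log; this sketch would normally run on the node itself.
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.24.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```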
	I0812 13:01:34.424461  508736 node_ready.go:53] node "test-preload-990043" has status "Ready":"False"
	I0812 13:01:35.928127  508736 node_ready.go:49] node "test-preload-990043" has status "Ready":"True"
	I0812 13:01:35.928155  508736 node_ready.go:38] duration metric: took 3.507101684s for node "test-preload-990043" to be "Ready" ...
	I0812 13:01:35.928163  508736 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 13:01:35.934263  508736 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:35.940944  508736 pod_ready.go:92] pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:35.940969  508736 pod_ready.go:81] duration metric: took 6.672137ms for pod "coredns-6d4b75cb6d-vc452" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:35.940979  508736 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:37.948882  508736 pod_ready.go:102] pod "etcd-test-preload-990043" in "kube-system" namespace has status "Ready":"False"
	I0812 13:01:40.449790  508736 pod_ready.go:102] pod "etcd-test-preload-990043" in "kube-system" namespace has status "Ready":"False"
	I0812 13:01:40.947668  508736 pod_ready.go:92] pod "etcd-test-preload-990043" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:40.947695  508736 pod_ready.go:81] duration metric: took 5.006709549s for pod "etcd-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.947705  508736 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.952233  508736 pod_ready.go:92] pod "kube-apiserver-test-preload-990043" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:40.952261  508736 pod_ready.go:81] duration metric: took 4.548876ms for pod "kube-apiserver-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.952275  508736 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.956227  508736 pod_ready.go:92] pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:40.956253  508736 pod_ready.go:81] duration metric: took 3.969443ms for pod "kube-controller-manager-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.956279  508736 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z49pr" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.960169  508736 pod_ready.go:92] pod "kube-proxy-z49pr" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:40.960195  508736 pod_ready.go:81] duration metric: took 3.907307ms for pod "kube-proxy-z49pr" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.960207  508736 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.964684  508736 pod_ready.go:92] pod "kube-scheduler-test-preload-990043" in "kube-system" namespace has status "Ready":"True"
	I0812 13:01:40.964711  508736 pod_ready.go:81] duration metric: took 4.493129ms for pod "kube-scheduler-test-preload-990043" in "kube-system" namespace to be "Ready" ...
	I0812 13:01:40.964727  508736 pod_ready.go:38] duration metric: took 5.036553804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 13:01:40.964768  508736 api_server.go:52] waiting for apiserver process to appear ...
	I0812 13:01:40.964831  508736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:01:40.979244  508736 api_server.go:72] duration metric: took 8.755410077s to wait for apiserver process to appear ...
	I0812 13:01:40.979271  508736 api_server.go:88] waiting for apiserver healthz status ...
	I0812 13:01:40.979288  508736 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0812 13:01:40.984326  508736 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0812 13:01:40.985162  508736 api_server.go:141] control plane version: v1.24.4
	I0812 13:01:40.985187  508736 api_server.go:131] duration metric: took 5.908676ms to wait for apiserver health ...
	I0812 13:01:40.985197  508736 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 13:01:41.148089  508736 system_pods.go:59] 7 kube-system pods found
	I0812 13:01:41.148122  508736 system_pods.go:61] "coredns-6d4b75cb6d-vc452" [58e7b67d-8651-4cb0-b156-41e57c9be638] Running
	I0812 13:01:41.148128  508736 system_pods.go:61] "etcd-test-preload-990043" [d57d28bf-3f68-4859-bf1f-cb3d152551c6] Running
	I0812 13:01:41.148133  508736 system_pods.go:61] "kube-apiserver-test-preload-990043" [2077135e-c5c3-42ea-ba5c-403b25547698] Running
	I0812 13:01:41.148137  508736 system_pods.go:61] "kube-controller-manager-test-preload-990043" [0661d877-e302-4712-851c-1f2b7a5550f2] Running
	I0812 13:01:41.148142  508736 system_pods.go:61] "kube-proxy-z49pr" [983c13ee-40b6-4aab-9703-8f1f1c4dca06] Running
	I0812 13:01:41.148146  508736 system_pods.go:61] "kube-scheduler-test-preload-990043" [737adaa8-dc6e-4501-ab13-fbe3b2d745a1] Running
	I0812 13:01:41.148150  508736 system_pods.go:61] "storage-provisioner" [42d73f1e-8d3c-482c-a057-8f62dfbe94b3] Running
	I0812 13:01:41.148164  508736 system_pods.go:74] duration metric: took 162.952966ms to wait for pod list to return data ...
	I0812 13:01:41.148173  508736 default_sa.go:34] waiting for default service account to be created ...
	I0812 13:01:41.345451  508736 default_sa.go:45] found service account: "default"
	I0812 13:01:41.345490  508736 default_sa.go:55] duration metric: took 197.304083ms for default service account to be created ...
	I0812 13:01:41.345500  508736 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 13:01:41.548467  508736 system_pods.go:86] 7 kube-system pods found
	I0812 13:01:41.548497  508736 system_pods.go:89] "coredns-6d4b75cb6d-vc452" [58e7b67d-8651-4cb0-b156-41e57c9be638] Running
	I0812 13:01:41.548503  508736 system_pods.go:89] "etcd-test-preload-990043" [d57d28bf-3f68-4859-bf1f-cb3d152551c6] Running
	I0812 13:01:41.548507  508736 system_pods.go:89] "kube-apiserver-test-preload-990043" [2077135e-c5c3-42ea-ba5c-403b25547698] Running
	I0812 13:01:41.548511  508736 system_pods.go:89] "kube-controller-manager-test-preload-990043" [0661d877-e302-4712-851c-1f2b7a5550f2] Running
	I0812 13:01:41.548514  508736 system_pods.go:89] "kube-proxy-z49pr" [983c13ee-40b6-4aab-9703-8f1f1c4dca06] Running
	I0812 13:01:41.548518  508736 system_pods.go:89] "kube-scheduler-test-preload-990043" [737adaa8-dc6e-4501-ab13-fbe3b2d745a1] Running
	I0812 13:01:41.548521  508736 system_pods.go:89] "storage-provisioner" [42d73f1e-8d3c-482c-a057-8f62dfbe94b3] Running
	I0812 13:01:41.548528  508736 system_pods.go:126] duration metric: took 203.022289ms to wait for k8s-apps to be running ...
	I0812 13:01:41.548535  508736 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 13:01:41.548589  508736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 13:01:41.564122  508736 system_svc.go:56] duration metric: took 15.576895ms WaitForService to wait for kubelet
	I0812 13:01:41.564156  508736 kubeadm.go:582] duration metric: took 9.340326748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 13:01:41.564175  508736 node_conditions.go:102] verifying NodePressure condition ...
	I0812 13:01:41.746369  508736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 13:01:41.746397  508736 node_conditions.go:123] node cpu capacity is 2
	I0812 13:01:41.746408  508736 node_conditions.go:105] duration metric: took 182.228764ms to run NodePressure ...
	I0812 13:01:41.746422  508736 start.go:241] waiting for startup goroutines ...
	I0812 13:01:41.746429  508736 start.go:246] waiting for cluster config update ...
	I0812 13:01:41.746439  508736 start.go:255] writing updated cluster config ...
	I0812 13:01:41.746720  508736 ssh_runner.go:195] Run: rm -f paused
	I0812 13:01:41.796745  508736 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0812 13:01:41.799039  508736 out.go:177] 
	W0812 13:01:41.800557  508736 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0812 13:01:41.801915  508736 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0812 13:01:41.803626  508736 out.go:177] * Done! kubectl is now configured to use "test-preload-990043" cluster and "default" namespace by default
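The warning a few lines above comes from comparing the client and cluster minor versions (1.30 vs 1.24, a skew of 6). A minimal sketch of that comparison; the parsing and the "skew > 1" threshold are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. "1.30.3" vs "1.24.4" -> 6.
func minorSkew(client, server string) (int, error) {
	parse := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := parse(client)
	if err != nil {
		return 0, err
	}
	s, err := parse(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.3", "1.24.4")
	fmt.Printf("minor skew: %d\n", skew) // minor skew: 6
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with the cluster version")
	}
}
```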
	
	
	==> CRI-O <==
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.725452162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467702725430016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a9d0285-b591-43b7-bbb3-35078649fc3f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.726098059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=662eb733-d526-4244-8a63-8a317848b0dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.726156001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=662eb733-d526-4244-8a63-8a317848b0dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.726310536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c01f6a785ce021f2ad973190a811d23b5b9ade6b61784a15a2ac5f233ecebda,PodSandboxId:85fd8c0cec9fced1bfdd5eb8632586801c2b436ae17e5cd1fc7776bd9e711000,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723467693770930430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vc452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e7b67d-8651-4cb0-b156-41e57c9be638,},Annotations:map[string]string{io.kubernetes.container.hash: f242ae26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ce4c07a3ad35bcf50e541159cd00d1cef65f6fef6680d4e814188388df338e,PodSandboxId:9b01d73c7ebf0e984b8509b24334f317ff49c205d455a41cf33ddc4b5f30825b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723467686462758686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z49pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 983c13ee-40b6-4aab-9703-8f1f1c4dca06,},Annotations:map[string]string{io.kubernetes.container.hash: 2763aa9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:947f27e716178c12d8f2e32e2062028fc25fef9305ea7b9b2331e40fb98ade13,PodSandboxId:82f17333dedb84256478d2c7131ade6515aefa4adaaaae150842b6d8ea1f27fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723467686156319001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d
73f1e-8d3c-482c-a057-8f62dfbe94b3,},Annotations:map[string]string{io.kubernetes.container.hash: 31a3a46e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcba29e25cb63eee0323870930b056912e6d8cc37803db08f80fea7a6b9b7b75,PodSandboxId:d4d9429f5a5834a4d3192f600d5bb84b24a9136c2a8cc32a57fcb63dd8c0b4e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723467681269950275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c867ec3c5
1777fd7989d98efd46b8d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbf3fc03f7448558fb75c482abb1fb991dc0dd796a78a0d9d6da1dea9c1b1b9,PodSandboxId:f842470efaa7aa6dcfcaaf332a01bc828797ccdac847691edd476e8a70c357f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723467681262933685,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea919c0aefed6d688fbee0233334aaf,},Annotations:map[
string]string{io.kubernetes.container.hash: 92033793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa24094db9ac8e1eaa171762aa14145340b00854562bb31d16333d68add59fb1,PodSandboxId:64123598960d17a0b856b7b67d75f479cec8259a793b115830ec75c333a6a5b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723467681215743945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5ed3db583e614c2aff9f49efb4103f5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427a65a1651705b0a8d1ee7bbe4e0267866ea16682352187a500ac3cbe988016,PodSandboxId:101ac6b2adc17d0f89c116df84ff20ecdee32219221c8680d00ca24df334df0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723467681152571020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614e083f8cb79dfd546af513088c74ce,},Annotations
:map[string]string{io.kubernetes.container.hash: 971b06bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=662eb733-d526-4244-8a63-8a317848b0dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.764164369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fb0859d-109c-4e38-a20a-807ebdd7dd08 name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.764240255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fb0859d-109c-4e38-a20a-807ebdd7dd08 name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.765638218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87611cd9-a866-49f0-b5da-0066fbc3c465 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.766221769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467702766198414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87611cd9-a866-49f0-b5da-0066fbc3c465 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.766742670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73abd680-6868-46d1-860e-f17084349a6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.766810789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73abd680-6868-46d1-860e-f17084349a6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.767009683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c01f6a785ce021f2ad973190a811d23b5b9ade6b61784a15a2ac5f233ecebda,PodSandboxId:85fd8c0cec9fced1bfdd5eb8632586801c2b436ae17e5cd1fc7776bd9e711000,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723467693770930430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vc452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e7b67d-8651-4cb0-b156-41e57c9be638,},Annotations:map[string]string{io.kubernetes.container.hash: f242ae26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ce4c07a3ad35bcf50e541159cd00d1cef65f6fef6680d4e814188388df338e,PodSandboxId:9b01d73c7ebf0e984b8509b24334f317ff49c205d455a41cf33ddc4b5f30825b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723467686462758686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z49pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 983c13ee-40b6-4aab-9703-8f1f1c4dca06,},Annotations:map[string]string{io.kubernetes.container.hash: 2763aa9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:947f27e716178c12d8f2e32e2062028fc25fef9305ea7b9b2331e40fb98ade13,PodSandboxId:82f17333dedb84256478d2c7131ade6515aefa4adaaaae150842b6d8ea1f27fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723467686156319001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d
73f1e-8d3c-482c-a057-8f62dfbe94b3,},Annotations:map[string]string{io.kubernetes.container.hash: 31a3a46e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcba29e25cb63eee0323870930b056912e6d8cc37803db08f80fea7a6b9b7b75,PodSandboxId:d4d9429f5a5834a4d3192f600d5bb84b24a9136c2a8cc32a57fcb63dd8c0b4e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723467681269950275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c867ec3c5
1777fd7989d98efd46b8d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbf3fc03f7448558fb75c482abb1fb991dc0dd796a78a0d9d6da1dea9c1b1b9,PodSandboxId:f842470efaa7aa6dcfcaaf332a01bc828797ccdac847691edd476e8a70c357f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723467681262933685,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea919c0aefed6d688fbee0233334aaf,},Annotations:map[
string]string{io.kubernetes.container.hash: 92033793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa24094db9ac8e1eaa171762aa14145340b00854562bb31d16333d68add59fb1,PodSandboxId:64123598960d17a0b856b7b67d75f479cec8259a793b115830ec75c333a6a5b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723467681215743945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5ed3db583e614c2aff9f49efb4103f5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427a65a1651705b0a8d1ee7bbe4e0267866ea16682352187a500ac3cbe988016,PodSandboxId:101ac6b2adc17d0f89c116df84ff20ecdee32219221c8680d00ca24df334df0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723467681152571020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614e083f8cb79dfd546af513088c74ce,},Annotations
:map[string]string{io.kubernetes.container.hash: 971b06bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73abd680-6868-46d1-860e-f17084349a6a name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.804236731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54ce19a5-c27a-446d-9e60-d1a91fe9ef7c name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.804312508Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54ce19a5-c27a-446d-9e60-d1a91fe9ef7c name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.805388112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec947594-8cba-4978-ac74-6e5ac788f02f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.805826312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467702805805246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec947594-8cba-4978-ac74-6e5ac788f02f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.806358611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56cff16a-f920-4d25-b08d-e8d5bc5f65ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.806424034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56cff16a-f920-4d25-b08d-e8d5bc5f65ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.806607949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c01f6a785ce021f2ad973190a811d23b5b9ade6b61784a15a2ac5f233ecebda,PodSandboxId:85fd8c0cec9fced1bfdd5eb8632586801c2b436ae17e5cd1fc7776bd9e711000,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723467693770930430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vc452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e7b67d-8651-4cb0-b156-41e57c9be638,},Annotations:map[string]string{io.kubernetes.container.hash: f242ae26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ce4c07a3ad35bcf50e541159cd00d1cef65f6fef6680d4e814188388df338e,PodSandboxId:9b01d73c7ebf0e984b8509b24334f317ff49c205d455a41cf33ddc4b5f30825b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723467686462758686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z49pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 983c13ee-40b6-4aab-9703-8f1f1c4dca06,},Annotations:map[string]string{io.kubernetes.container.hash: 2763aa9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:947f27e716178c12d8f2e32e2062028fc25fef9305ea7b9b2331e40fb98ade13,PodSandboxId:82f17333dedb84256478d2c7131ade6515aefa4adaaaae150842b6d8ea1f27fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723467686156319001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d
73f1e-8d3c-482c-a057-8f62dfbe94b3,},Annotations:map[string]string{io.kubernetes.container.hash: 31a3a46e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcba29e25cb63eee0323870930b056912e6d8cc37803db08f80fea7a6b9b7b75,PodSandboxId:d4d9429f5a5834a4d3192f600d5bb84b24a9136c2a8cc32a57fcb63dd8c0b4e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723467681269950275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c867ec3c5
1777fd7989d98efd46b8d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbf3fc03f7448558fb75c482abb1fb991dc0dd796a78a0d9d6da1dea9c1b1b9,PodSandboxId:f842470efaa7aa6dcfcaaf332a01bc828797ccdac847691edd476e8a70c357f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723467681262933685,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea919c0aefed6d688fbee0233334aaf,},Annotations:map[
string]string{io.kubernetes.container.hash: 92033793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa24094db9ac8e1eaa171762aa14145340b00854562bb31d16333d68add59fb1,PodSandboxId:64123598960d17a0b856b7b67d75f479cec8259a793b115830ec75c333a6a5b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723467681215743945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5ed3db583e614c2aff9f49efb4103f5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427a65a1651705b0a8d1ee7bbe4e0267866ea16682352187a500ac3cbe988016,PodSandboxId:101ac6b2adc17d0f89c116df84ff20ecdee32219221c8680d00ca24df334df0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723467681152571020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614e083f8cb79dfd546af513088c74ce,},Annotations
:map[string]string{io.kubernetes.container.hash: 971b06bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56cff16a-f920-4d25-b08d-e8d5bc5f65ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.841151168Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e838735-a081-4445-a2d7-5af878edcffd name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.841240501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e838735-a081-4445-a2d7-5af878edcffd name=/runtime.v1.RuntimeService/Version
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.842523216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9338bc5c-4d9f-4069-9c2e-f34a75dac2d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.843107711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723467702843082944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9338bc5c-4d9f-4069-9c2e-f34a75dac2d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.843738902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f106d128-ce1e-402c-9d5b-ddd80d8cbe26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.843811232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f106d128-ce1e-402c-9d5b-ddd80d8cbe26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:01:42 test-preload-990043 crio[678]: time="2024-08-12 13:01:42.844010117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c01f6a785ce021f2ad973190a811d23b5b9ade6b61784a15a2ac5f233ecebda,PodSandboxId:85fd8c0cec9fced1bfdd5eb8632586801c2b436ae17e5cd1fc7776bd9e711000,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723467693770930430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-vc452,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e7b67d-8651-4cb0-b156-41e57c9be638,},Annotations:map[string]string{io.kubernetes.container.hash: f242ae26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ce4c07a3ad35bcf50e541159cd00d1cef65f6fef6680d4e814188388df338e,PodSandboxId:9b01d73c7ebf0e984b8509b24334f317ff49c205d455a41cf33ddc4b5f30825b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723467686462758686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z49pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 983c13ee-40b6-4aab-9703-8f1f1c4dca06,},Annotations:map[string]string{io.kubernetes.container.hash: 2763aa9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:947f27e716178c12d8f2e32e2062028fc25fef9305ea7b9b2331e40fb98ade13,PodSandboxId:82f17333dedb84256478d2c7131ade6515aefa4adaaaae150842b6d8ea1f27fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723467686156319001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42d
73f1e-8d3c-482c-a057-8f62dfbe94b3,},Annotations:map[string]string{io.kubernetes.container.hash: 31a3a46e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcba29e25cb63eee0323870930b056912e6d8cc37803db08f80fea7a6b9b7b75,PodSandboxId:d4d9429f5a5834a4d3192f600d5bb84b24a9136c2a8cc32a57fcb63dd8c0b4e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723467681269950275,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c867ec3c5
1777fd7989d98efd46b8d8,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbf3fc03f7448558fb75c482abb1fb991dc0dd796a78a0d9d6da1dea9c1b1b9,PodSandboxId:f842470efaa7aa6dcfcaaf332a01bc828797ccdac847691edd476e8a70c357f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723467681262933685,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea919c0aefed6d688fbee0233334aaf,},Annotations:map[
string]string{io.kubernetes.container.hash: 92033793,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa24094db9ac8e1eaa171762aa14145340b00854562bb31d16333d68add59fb1,PodSandboxId:64123598960d17a0b856b7b67d75f479cec8259a793b115830ec75c333a6a5b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723467681215743945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5ed3db583e614c2aff9f49efb4103f5,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:427a65a1651705b0a8d1ee7bbe4e0267866ea16682352187a500ac3cbe988016,PodSandboxId:101ac6b2adc17d0f89c116df84ff20ecdee32219221c8680d00ca24df334df0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723467681152571020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-990043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614e083f8cb79dfd546af513088c74ce,},Annotations
:map[string]string{io.kubernetes.container.hash: 971b06bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f106d128-ce1e-402c-9d5b-ddd80d8cbe26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c01f6a785ce0       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   85fd8c0cec9fc       coredns-6d4b75cb6d-vc452
	27ce4c07a3ad3       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   9b01d73c7ebf0       kube-proxy-z49pr
	947f27e716178       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   82f17333dedb8       storage-provisioner
	dcba29e25cb63       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   d4d9429f5a583       kube-scheduler-test-preload-990043
	2bbf3fc03f744       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   f842470efaa7a       etcd-test-preload-990043
	aa24094db9ac8       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   64123598960d1       kube-controller-manager-test-preload-990043
	427a65a165170       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   101ac6b2adc17       kube-apiserver-test-preload-990043
	
	
	==> coredns [5c01f6a785ce021f2ad973190a811d23b5b9ade6b61784a15a2ac5f233ecebda] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35404 - 6442 "HINFO IN 1700415874550303935.2154178568504251504. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013523317s
	
	
	==> describe nodes <==
	Name:               test-preload-990043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-990043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=799e8a7dafc4266c6fcf08e799ac850effb94bc5
	                    minikube.k8s.io/name=test-preload-990043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_12T12_59_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 12:59:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-990043
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 13:01:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 13:01:35 +0000   Mon, 12 Aug 2024 12:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 13:01:35 +0000   Mon, 12 Aug 2024 12:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 13:01:35 +0000   Mon, 12 Aug 2024 12:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 13:01:35 +0000   Mon, 12 Aug 2024 13:01:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    test-preload-990043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64256824957c4a5c8bc8074f7e6a95ff
	  System UUID:                64256824-957c-4a5c-8bc8-074f7e6a95ff
	  Boot ID:                    18b03393-1506-4071-996f-40dc0583efe3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vc452                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-test-preload-990043                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-990043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-test-preload-990043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-z49pr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-990043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 96s                kube-proxy       
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node test-preload-990043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node test-preload-990043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node test-preload-990043 status is now: NodeHasSufficientPID
	  Normal  NodeReady                100s               kubelet          Node test-preload-990043 status is now: NodeReady
	  Normal  RegisteredNode           99s                node-controller  Node test-preload-990043 event: Registered Node test-preload-990043 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-990043 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-990043 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-990043 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-990043 event: Registered Node test-preload-990043 in Controller
	
	
	==> dmesg <==
	[Aug12 13:00] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051314] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.799881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610051] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.553422] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug12 13:01] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.056487] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054833] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.212178] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.130773] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.301591] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[ +12.921690] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[  +0.061142] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.884868] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +5.608842] kauditd_printk_skb: 105 callbacks suppressed
	[  +6.456202] systemd-fstab-generator[1748]: Ignoring "noauto" option for root device
	[  +0.087220] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.122188] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [2bbf3fc03f7448558fb75c482abb1fb991dc0dd796a78a0d9d6da1dea9c1b1b9] <==
	{"level":"info","ts":"2024-08-12T13:01:21.638Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"38dbae10e7efb596","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-12T13:01:21.641Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-12T13:01:21.643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 switched to configuration voters=(4097059673657554326)"}
	{"level":"info","ts":"2024-08-12T13:01:21.643Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f45b5855e490ef48","local-member-id":"38dbae10e7efb596","added-peer-id":"38dbae10e7efb596","added-peer-peer-urls":["https://192.168.39.105:2380"]}
	{"level":"info","ts":"2024-08-12T13:01:21.645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f45b5855e490ef48","local-member-id":"38dbae10e7efb596","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T13:01:21.649Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T13:01:21.654Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T13:01:21.654Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-08-12T13:01:21.654Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-08-12T13:01:21.655Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"38dbae10e7efb596","initial-advertise-peer-urls":["https://192.168.39.105:2380"],"listen-peer-urls":["https://192.168.39.105:2380"],"advertise-client-urls":["https://192.168.39.105:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.105:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T13:01:21.655Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T13:01:22.808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-12T13:01:22.808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 received MsgPreVoteResp from 38dbae10e7efb596 at term 2"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 received MsgVoteResp from 38dbae10e7efb596 at term 3"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 became leader at term 3"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38dbae10e7efb596 elected leader 38dbae10e7efb596 at term 3"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"38dbae10e7efb596","local-member-attributes":"{Name:test-preload-990043 ClientURLs:[https://192.168.39.105:2379]}","request-path":"/0/members/38dbae10e7efb596/attributes","cluster-id":"f45b5855e490ef48","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T13:01:22.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:01:22.811Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.105:2379"}
	{"level":"info","ts":"2024-08-12T13:01:22.811Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:01:22.812Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T13:01:22.812Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T13:01:22.812Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:01:43 up 0 min,  0 users,  load average: 0.66, 0.20, 0.07
	Linux test-preload-990043 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [427a65a1651705b0a8d1ee7bbe4e0267866ea16682352187a500ac3cbe988016] <==
	I0812 13:01:25.286779       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0812 13:01:25.286899       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0812 13:01:25.286942       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0812 13:01:25.287009       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 13:01:25.301184       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0812 13:01:25.319400       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0812 13:01:25.319527       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0812 13:01:25.365591       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 13:01:25.373830       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0812 13:01:25.374807       1 cache.go:39] Caches are synced for autoregister controller
	I0812 13:01:25.375726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 13:01:25.389505       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0812 13:01:25.393987       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0812 13:01:25.420041       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0812 13:01:25.448651       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0812 13:01:25.965000       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0812 13:01:26.278418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0812 13:01:26.815143       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0812 13:01:27.181383       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0812 13:01:27.197770       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0812 13:01:27.235667       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0812 13:01:27.259762       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 13:01:27.273762       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0812 13:01:38.438559       1 controller.go:611] quota admission added evaluator for: endpoints
	I0812 13:01:38.442434       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aa24094db9ac8e1eaa171762aa14145340b00854562bb31d16333d68add59fb1] <==
	I0812 13:01:38.452030       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0812 13:01:38.452964       1 shared_informer.go:262] Caches are synced for stateful set
	I0812 13:01:38.455626       1 shared_informer.go:262] Caches are synced for taint
	I0812 13:01:38.455634       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0812 13:01:38.456045       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0812 13:01:38.456154       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0812 13:01:38.456272       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-990043. Assuming now as a timestamp.
	I0812 13:01:38.456333       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0812 13:01:38.456547       1 event.go:294] "Event occurred" object="test-preload-990043" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-990043 event: Registered Node test-preload-990043 in Controller"
	I0812 13:01:38.458738       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0812 13:01:38.468303       1 shared_informer.go:262] Caches are synced for daemon sets
	I0812 13:01:38.474992       1 shared_informer.go:262] Caches are synced for GC
	I0812 13:01:38.475074       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0812 13:01:38.476602       1 shared_informer.go:262] Caches are synced for HPA
	I0812 13:01:38.476811       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0812 13:01:38.572593       1 shared_informer.go:262] Caches are synced for crt configmap
	I0812 13:01:38.631511       1 shared_informer.go:262] Caches are synced for expand
	I0812 13:01:38.660504       1 shared_informer.go:262] Caches are synced for resource quota
	I0812 13:01:38.666153       1 shared_informer.go:262] Caches are synced for resource quota
	I0812 13:01:38.674416       1 shared_informer.go:262] Caches are synced for PV protection
	I0812 13:01:38.688671       1 shared_informer.go:262] Caches are synced for persistent volume
	I0812 13:01:38.691989       1 shared_informer.go:262] Caches are synced for attach detach
	I0812 13:01:39.116149       1 shared_informer.go:262] Caches are synced for garbage collector
	I0812 13:01:39.162643       1 shared_informer.go:262] Caches are synced for garbage collector
	I0812 13:01:39.162759       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [27ce4c07a3ad35bcf50e541159cd00d1cef65f6fef6680d4e814188388df338e] <==
	I0812 13:01:26.758938       1 node.go:163] Successfully retrieved node IP: 192.168.39.105
	I0812 13:01:26.759113       1 server_others.go:138] "Detected node IP" address="192.168.39.105"
	I0812 13:01:26.759223       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0812 13:01:26.800917       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0812 13:01:26.801027       1 server_others.go:206] "Using iptables Proxier"
	I0812 13:01:26.801742       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0812 13:01:26.802822       1 server.go:661] "Version info" version="v1.24.4"
	I0812 13:01:26.802972       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 13:01:26.804658       1 config.go:317] "Starting service config controller"
	I0812 13:01:26.805101       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0812 13:01:26.805195       1 config.go:226] "Starting endpoint slice config controller"
	I0812 13:01:26.805220       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0812 13:01:26.806117       1 config.go:444] "Starting node config controller"
	I0812 13:01:26.807989       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0812 13:01:26.905364       1 shared_informer.go:262] Caches are synced for service config
	I0812 13:01:26.905538       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0812 13:01:26.908787       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [dcba29e25cb63eee0323870930b056912e6d8cc37803db08f80fea7a6b9b7b75] <==
	I0812 13:01:21.956316       1 serving.go:348] Generated self-signed cert in-memory
	W0812 13:01:25.330334       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 13:01:25.330399       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 13:01:25.330411       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 13:01:25.330439       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 13:01:25.380062       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0812 13:01:25.380185       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 13:01:25.396576       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0812 13:01:25.396940       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 13:01:25.398112       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 13:01:25.396975       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0812 13:01:25.498248       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.447313    1136 topology_manager.go:200] "Topology Admit Handler"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: E0812 13:01:25.450314    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vc452" podUID=58e7b67d-8651-4cb0-b156-41e57c9be638
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.454355    1136 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-990043"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.454497    1136 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-990043"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.460572    1136 setters.go:532] "Node became not ready" node="test-preload-990043" condition={Type:Ready Status:False LastHeartbeatTime:2024-08-12 13:01:25.460478478 +0000 UTC m=+5.149364284 LastTransitionTime:2024-08-12 13:01:25.460478478 +0000 UTC m=+5.149364284 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: E0812 13:01:25.522107    1136 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529142    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zdlc\" (UniqueName: \"kubernetes.io/projected/983c13ee-40b6-4aab-9703-8f1f1c4dca06-kube-api-access-2zdlc\") pod \"kube-proxy-z49pr\" (UID: \"983c13ee-40b6-4aab-9703-8f1f1c4dca06\") " pod="kube-system/kube-proxy-z49pr"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529193    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxt8\" (UniqueName: \"kubernetes.io/projected/58e7b67d-8651-4cb0-b156-41e57c9be638-kube-api-access-fcxt8\") pod \"coredns-6d4b75cb6d-vc452\" (UID: \"58e7b67d-8651-4cb0-b156-41e57c9be638\") " pod="kube-system/coredns-6d4b75cb6d-vc452"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529214    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42d73f1e-8d3c-482c-a057-8f62dfbe94b3-tmp\") pod \"storage-provisioner\" (UID: \"42d73f1e-8d3c-482c-a057-8f62dfbe94b3\") " pod="kube-system/storage-provisioner"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529237    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983c13ee-40b6-4aab-9703-8f1f1c4dca06-lib-modules\") pod \"kube-proxy-z49pr\" (UID: \"983c13ee-40b6-4aab-9703-8f1f1c4dca06\") " pod="kube-system/kube-proxy-z49pr"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529257    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume\") pod \"coredns-6d4b75cb6d-vc452\" (UID: \"58e7b67d-8651-4cb0-b156-41e57c9be638\") " pod="kube-system/coredns-6d4b75cb6d-vc452"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529275    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swh5q\" (UniqueName: \"kubernetes.io/projected/42d73f1e-8d3c-482c-a057-8f62dfbe94b3-kube-api-access-swh5q\") pod \"storage-provisioner\" (UID: \"42d73f1e-8d3c-482c-a057-8f62dfbe94b3\") " pod="kube-system/storage-provisioner"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529292    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983c13ee-40b6-4aab-9703-8f1f1c4dca06-xtables-lock\") pod \"kube-proxy-z49pr\" (UID: \"983c13ee-40b6-4aab-9703-8f1f1c4dca06\") " pod="kube-system/kube-proxy-z49pr"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529308    1136 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/983c13ee-40b6-4aab-9703-8f1f1c4dca06-kube-proxy\") pod \"kube-proxy-z49pr\" (UID: \"983c13ee-40b6-4aab-9703-8f1f1c4dca06\") " pod="kube-system/kube-proxy-z49pr"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: I0812 13:01:25.529326    1136 reconciler.go:159] "Reconciler: start to sync state"
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: E0812 13:01:25.633159    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 12 13:01:25 test-preload-990043 kubelet[1136]: E0812 13:01:25.633316    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume podName:58e7b67d-8651-4cb0-b156-41e57c9be638 nodeName:}" failed. No retries permitted until 2024-08-12 13:01:26.133246092 +0000 UTC m=+5.822131918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume") pod "coredns-6d4b75cb6d-vc452" (UID: "58e7b67d-8651-4cb0-b156-41e57c9be638") : object "kube-system"/"coredns" not registered
	Aug 12 13:01:26 test-preload-990043 kubelet[1136]: E0812 13:01:26.138393    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 12 13:01:26 test-preload-990043 kubelet[1136]: E0812 13:01:26.138458    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume podName:58e7b67d-8651-4cb0-b156-41e57c9be638 nodeName:}" failed. No retries permitted until 2024-08-12 13:01:27.138443181 +0000 UTC m=+6.827329002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume") pod "coredns-6d4b75cb6d-vc452" (UID: "58e7b67d-8651-4cb0-b156-41e57c9be638") : object "kube-system"/"coredns" not registered
	Aug 12 13:01:27 test-preload-990043 kubelet[1136]: E0812 13:01:27.145604    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 12 13:01:27 test-preload-990043 kubelet[1136]: E0812 13:01:27.145706    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume podName:58e7b67d-8651-4cb0-b156-41e57c9be638 nodeName:}" failed. No retries permitted until 2024-08-12 13:01:29.14568556 +0000 UTC m=+8.834571368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume") pod "coredns-6d4b75cb6d-vc452" (UID: "58e7b67d-8651-4cb0-b156-41e57c9be638") : object "kube-system"/"coredns" not registered
	Aug 12 13:01:27 test-preload-990043 kubelet[1136]: E0812 13:01:27.562310    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vc452" podUID=58e7b67d-8651-4cb0-b156-41e57c9be638
	Aug 12 13:01:29 test-preload-990043 kubelet[1136]: E0812 13:01:29.162303    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 12 13:01:29 test-preload-990043 kubelet[1136]: E0812 13:01:29.162417    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume podName:58e7b67d-8651-4cb0-b156-41e57c9be638 nodeName:}" failed. No retries permitted until 2024-08-12 13:01:33.162396721 +0000 UTC m=+12.851282539 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58e7b67d-8651-4cb0-b156-41e57c9be638-config-volume") pod "coredns-6d4b75cb6d-vc452" (UID: "58e7b67d-8651-4cb0-b156-41e57c9be638") : object "kube-system"/"coredns" not registered
	Aug 12 13:01:29 test-preload-990043 kubelet[1136]: E0812 13:01:29.562609    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-vc452" podUID=58e7b67d-8651-4cb0-b156-41e57c9be638
	
	
	==> storage-provisioner [947f27e716178c12d8f2e32e2062028fc25fef9305ea7b9b2331e40fb98ade13] <==
	I0812 13:01:26.250439       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-990043 -n test-preload-990043
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-990043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-990043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-990043
--- FAIL: TestPreload (270.62s)
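Aside: the coredns errors in the post-mortem above show the kubelet backing off between volume-mount retries, doubling the wait each time ("durationBeforeRetry" 500ms, 1s, 2s, 4s) until the ConfigMap is registered. The following is a minimal, self-contained Go sketch of that doubling-backoff pattern under stated assumptions; the helper name, the delay cap, and the simulated failure count are illustrative choices, not kubelet source.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure and capping it
// at maxDelay, mirroring the 500ms -> 1s -> 2s -> 4s progression seen in the log.
// Illustrative sketch only; not the kubelet's actual implementation.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", i+1, err, delay)
		time.Sleep(delay)
		if delay*2 <= maxDelay {
			delay *= 2
		}
	}
	return err
}

func main() {
	remaining := 3 // assume the ConfigMap becomes visible after three failed attempts
	err := retryWithBackoff(func() error {
		if remaining > 0 {
			remaining--
			return errors.New(`object "kube-system"/"coredns" not registered`)
		}
		return nil
	}, 500*time.Millisecond, 8*time.Second, 10)
	fmt.Println("final result:", err)
}

Capping the delay keeps a transient failure (such as a ConfigMap that has not yet synced) from stretching into long waits while still avoiding a hot retry loop; in the failed run above the retries never succeed because the CNI is also reporting NetworkPluginNotReady, so the pod keeps being skipped.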

                                                
                                    
TestKubernetesUpgrade (410.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m58.504971937s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-399526] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-399526" primary control-plane node in "kubernetes-upgrade-399526" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 13:03:36.807803  510297 out.go:291] Setting OutFile to fd 1 ...
	I0812 13:03:36.808053  510297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:03:36.808061  510297 out.go:304] Setting ErrFile to fd 2...
	I0812 13:03:36.808066  510297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:03:36.808244  510297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 13:03:36.808776  510297 out.go:298] Setting JSON to false
	I0812 13:03:36.809696  510297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17148,"bootTime":1723450669,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 13:03:36.809760  510297 start.go:139] virtualization: kvm guest
	I0812 13:03:36.812740  510297 out.go:177] * [kubernetes-upgrade-399526] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 13:03:36.814808  510297 notify.go:220] Checking for updates...
	I0812 13:03:36.815898  510297 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 13:03:36.818686  510297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 13:03:36.821213  510297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:03:36.823652  510297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 13:03:36.826401  510297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 13:03:36.827806  510297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 13:03:36.829260  510297 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 13:03:36.867585  510297 out.go:177] * Using the kvm2 driver based on user configuration
	I0812 13:03:36.868806  510297 start.go:297] selected driver: kvm2
	I0812 13:03:36.868819  510297 start.go:901] validating driver "kvm2" against <nil>
	I0812 13:03:36.868833  510297 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 13:03:36.869954  510297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:03:36.887024  510297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 13:03:36.909353  510297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 13:03:36.909417  510297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 13:03:36.909707  510297 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 13:03:36.909786  510297 cni.go:84] Creating CNI manager for ""
	I0812 13:03:36.909799  510297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:03:36.909816  510297 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 13:03:36.909918  510297 start.go:340] cluster config:
	{Name:kubernetes-upgrade-399526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-399526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:03:36.910056  510297 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:03:36.911937  510297 out.go:177] * Starting "kubernetes-upgrade-399526" primary control-plane node in "kubernetes-upgrade-399526" cluster
	I0812 13:03:36.913296  510297 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 13:03:36.913342  510297 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 13:03:36.913357  510297 cache.go:56] Caching tarball of preloaded images
	I0812 13:03:36.913453  510297 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 13:03:36.913468  510297 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0812 13:03:36.913882  510297 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/config.json ...
	I0812 13:03:36.913914  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/config.json: {Name:mkc551384b5a73abb12cf717c406184c2fa389d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:03:36.914087  510297 start.go:360] acquireMachinesLock for kubernetes-upgrade-399526: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 13:04:03.106347  510297 start.go:364] duration metric: took 26.192207624s to acquireMachinesLock for "kubernetes-upgrade-399526"
	I0812 13:04:03.106439  510297 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-399526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-399526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 13:04:03.106575  510297 start.go:125] createHost starting for "" (driver="kvm2")
	I0812 13:04:03.108796  510297 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0812 13:04:03.109002  510297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:04:03.109062  510297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:04:03.126446  510297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0812 13:04:03.126866  510297 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:04:03.127485  510297 main.go:141] libmachine: Using API Version  1
	I0812 13:04:03.127514  510297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:04:03.127875  510297 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:04:03.128117  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetMachineName
	I0812 13:04:03.128328  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:03.128484  510297 start.go:159] libmachine.API.Create for "kubernetes-upgrade-399526" (driver="kvm2")
	I0812 13:04:03.128522  510297 client.go:168] LocalClient.Create starting
	I0812 13:04:03.128559  510297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem
	I0812 13:04:03.128609  510297 main.go:141] libmachine: Decoding PEM data...
	I0812 13:04:03.128629  510297 main.go:141] libmachine: Parsing certificate...
	I0812 13:04:03.128695  510297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem
	I0812 13:04:03.128723  510297 main.go:141] libmachine: Decoding PEM data...
	I0812 13:04:03.128739  510297 main.go:141] libmachine: Parsing certificate...
	I0812 13:04:03.128762  510297 main.go:141] libmachine: Running pre-create checks...
	I0812 13:04:03.128782  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .PreCreateCheck
	I0812 13:04:03.129182  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetConfigRaw
	I0812 13:04:03.129627  510297 main.go:141] libmachine: Creating machine...
	I0812 13:04:03.129644  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Create
	I0812 13:04:03.129820  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Creating KVM machine...
	I0812 13:04:03.131026  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found existing default KVM network
	I0812 13:04:03.132215  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.132037  510641 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:6b:2a} reservation:<nil>}
	I0812 13:04:03.133182  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.133076  510641 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000256330}
	I0812 13:04:03.133226  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | created network xml: 
	I0812 13:04:03.133253  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | <network>
	I0812 13:04:03.133269  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   <name>mk-kubernetes-upgrade-399526</name>
	I0812 13:04:03.133285  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   <dns enable='no'/>
	I0812 13:04:03.133325  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   
	I0812 13:04:03.133345  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0812 13:04:03.133356  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |     <dhcp>
	I0812 13:04:03.133373  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0812 13:04:03.133388  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |     </dhcp>
	I0812 13:04:03.133399  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   </ip>
	I0812 13:04:03.133408  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG |   
	I0812 13:04:03.133419  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | </network>
	I0812 13:04:03.133439  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | 
	I0812 13:04:03.138700  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | trying to create private KVM network mk-kubernetes-upgrade-399526 192.168.50.0/24...
	I0812 13:04:03.214091  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | private KVM network mk-kubernetes-upgrade-399526 192.168.50.0/24 created
	I0812 13:04:03.214123  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting up store path in /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526 ...
	I0812 13:04:03.214153  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.214045  510641 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 13:04:03.214171  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Building disk image from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 13:04:03.214239  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Downloading /home/jenkins/minikube-integration/19411-463103/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0812 13:04:03.473191  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.473033  510641 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa...
	I0812 13:04:03.650540  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.650362  510641 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/kubernetes-upgrade-399526.rawdisk...
	I0812 13:04:03.650575  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Writing magic tar header
	I0812 13:04:03.650595  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Writing SSH key tar header
	I0812 13:04:03.650608  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:03.650518  510641 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526 ...
	I0812 13:04:03.650708  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526
	I0812 13:04:03.650737  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526 (perms=drwx------)
	I0812 13:04:03.650779  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube/machines
	I0812 13:04:03.650793  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube/machines (perms=drwxr-xr-x)
	I0812 13:04:03.650805  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103/.minikube (perms=drwxr-xr-x)
	I0812 13:04:03.650818  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins/minikube-integration/19411-463103 (perms=drwxrwxr-x)
	I0812 13:04:03.650831  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 13:04:03.650850  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19411-463103
	I0812 13:04:03.650865  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0812 13:04:03.650880  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0812 13:04:03.650898  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home/jenkins
	I0812 13:04:03.650908  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0812 13:04:03.650922  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Creating domain...
	I0812 13:04:03.650941  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Checking permissions on dir: /home
	I0812 13:04:03.650969  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Skipping /home - not owner
	I0812 13:04:03.652120  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) define libvirt domain using xml: 
	I0812 13:04:03.652147  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) <domain type='kvm'>
	I0812 13:04:03.652158  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <name>kubernetes-upgrade-399526</name>
	I0812 13:04:03.652167  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <memory unit='MiB'>2200</memory>
	I0812 13:04:03.652176  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <vcpu>2</vcpu>
	I0812 13:04:03.652188  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <features>
	I0812 13:04:03.652198  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <acpi/>
	I0812 13:04:03.652208  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <apic/>
	I0812 13:04:03.652225  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <pae/>
	I0812 13:04:03.652237  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     
	I0812 13:04:03.652271  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   </features>
	I0812 13:04:03.652293  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <cpu mode='host-passthrough'>
	I0812 13:04:03.652301  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   
	I0812 13:04:03.652308  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   </cpu>
	I0812 13:04:03.652314  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <os>
	I0812 13:04:03.652321  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <type>hvm</type>
	I0812 13:04:03.652327  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <boot dev='cdrom'/>
	I0812 13:04:03.652334  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <boot dev='hd'/>
	I0812 13:04:03.652340  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <bootmenu enable='no'/>
	I0812 13:04:03.652347  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   </os>
	I0812 13:04:03.652358  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   <devices>
	I0812 13:04:03.652365  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <disk type='file' device='cdrom'>
	I0812 13:04:03.652374  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/boot2docker.iso'/>
	I0812 13:04:03.652381  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <target dev='hdc' bus='scsi'/>
	I0812 13:04:03.652387  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <readonly/>
	I0812 13:04:03.652394  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </disk>
	I0812 13:04:03.652399  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <disk type='file' device='disk'>
	I0812 13:04:03.652411  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0812 13:04:03.652422  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <source file='/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/kubernetes-upgrade-399526.rawdisk'/>
	I0812 13:04:03.652429  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <target dev='hda' bus='virtio'/>
	I0812 13:04:03.652435  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </disk>
	I0812 13:04:03.652442  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <interface type='network'>
	I0812 13:04:03.652449  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <source network='mk-kubernetes-upgrade-399526'/>
	I0812 13:04:03.652475  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <model type='virtio'/>
	I0812 13:04:03.652487  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </interface>
	I0812 13:04:03.652501  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <interface type='network'>
	I0812 13:04:03.652512  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <source network='default'/>
	I0812 13:04:03.652525  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <model type='virtio'/>
	I0812 13:04:03.652537  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </interface>
	I0812 13:04:03.652546  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <serial type='pty'>
	I0812 13:04:03.652558  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <target port='0'/>
	I0812 13:04:03.652570  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </serial>
	I0812 13:04:03.652581  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <console type='pty'>
	I0812 13:04:03.652592  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <target type='serial' port='0'/>
	I0812 13:04:03.652603  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </console>
	I0812 13:04:03.652610  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     <rng model='virtio'>
	I0812 13:04:03.652618  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)       <backend model='random'>/dev/random</backend>
	I0812 13:04:03.652628  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     </rng>
	I0812 13:04:03.652636  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     
	I0812 13:04:03.652656  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)     
	I0812 13:04:03.652666  510297 main.go:141] libmachine: (kubernetes-upgrade-399526)   </devices>
	I0812 13:04:03.652674  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) </domain>
	I0812 13:04:03.652682  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) 
	I0812 13:04:03.657792  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:0e:96:61 in network default
	I0812 13:04:03.658663  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Ensuring networks are active...
	I0812 13:04:03.658697  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:03.659653  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Ensuring network default is active
	I0812 13:04:03.660004  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Ensuring network mk-kubernetes-upgrade-399526 is active
	I0812 13:04:03.660539  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Getting domain xml...
	I0812 13:04:03.661468  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Creating domain...
	I0812 13:04:04.955628  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Waiting to get IP...
	I0812 13:04:04.956396  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:04.956817  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:04.956843  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:04.956781  510641 retry.go:31] will retry after 194.421824ms: waiting for machine to come up
	I0812 13:04:05.153551  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.154182  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.154209  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:05.154123  510641 retry.go:31] will retry after 365.914619ms: waiting for machine to come up
	I0812 13:04:05.521793  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.522484  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.522512  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:05.522419  510641 retry.go:31] will retry after 451.598346ms: waiting for machine to come up
	I0812 13:04:05.975937  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.976469  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:05.976495  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:05.976418  510641 retry.go:31] will retry after 490.122547ms: waiting for machine to come up
	I0812 13:04:06.468100  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:06.468671  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:06.468699  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:06.468576  510641 retry.go:31] will retry after 682.374894ms: waiting for machine to come up
	I0812 13:04:07.152715  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:07.153267  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:07.153295  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:07.153238  510641 retry.go:31] will retry after 616.31912ms: waiting for machine to come up
	I0812 13:04:07.771080  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:07.771589  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:07.771677  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:07.771527  510641 retry.go:31] will retry after 1.167622636s: waiting for machine to come up
	I0812 13:04:08.941320  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:08.941723  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:08.941750  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:08.941690  510641 retry.go:31] will retry after 1.260259797s: waiting for machine to come up
	I0812 13:04:10.203780  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:10.204304  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:10.204332  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:10.204228  510641 retry.go:31] will retry after 1.139970507s: waiting for machine to come up
	I0812 13:04:11.345616  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:11.346090  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:11.346123  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:11.346004  510641 retry.go:31] will retry after 1.397249678s: waiting for machine to come up
	I0812 13:04:12.745829  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:12.746300  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:12.746327  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:12.746259  510641 retry.go:31] will retry after 2.30090487s: waiting for machine to come up
	I0812 13:04:15.048717  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:15.049313  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:15.049347  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:15.049253  510641 retry.go:31] will retry after 3.475261866s: waiting for machine to come up
	I0812 13:04:18.526513  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:18.527034  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:18.527063  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:18.526967  510641 retry.go:31] will retry after 4.330008264s: waiting for machine to come up
	I0812 13:04:22.861545  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:22.861938  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find current IP address of domain kubernetes-upgrade-399526 in network mk-kubernetes-upgrade-399526
	I0812 13:04:22.861971  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | I0812 13:04:22.861892  510641 retry.go:31] will retry after 5.269850807s: waiting for machine to come up
	I0812 13:04:28.133109  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.133600  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Found IP for machine: 192.168.50.194
	I0812 13:04:28.133625  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Reserving static IP address...
	I0812 13:04:28.133644  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has current primary IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.133956  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-399526", mac: "52:54:00:d0:73:2d", ip: "192.168.50.194"} in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.212865  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Getting to WaitForSSH function...
	I0812 13:04:28.212899  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Reserved static IP address: 192.168.50.194
	I0812 13:04:28.212914  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Waiting for SSH to be available...
	I0812 13:04:28.215842  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.216302  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.216337  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.216455  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Using SSH client type: external
	I0812 13:04:28.216501  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Using SSH private key: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa (-rw-------)
	I0812 13:04:28.216533  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0812 13:04:28.216554  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | About to run SSH command:
	I0812 13:04:28.216570  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | exit 0
	I0812 13:04:28.341206  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | SSH cmd err, output: <nil>: 
	I0812 13:04:28.341479  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) KVM machine creation complete!
	I0812 13:04:28.341869  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetConfigRaw
	I0812 13:04:28.342376  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:28.342684  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:28.342889  510297 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0812 13:04:28.342901  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetState
	I0812 13:04:28.344277  510297 main.go:141] libmachine: Detecting operating system of created instance...
	I0812 13:04:28.344295  510297 main.go:141] libmachine: Waiting for SSH to be available...
	I0812 13:04:28.344301  510297 main.go:141] libmachine: Getting to WaitForSSH function...
	I0812 13:04:28.344307  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.346700  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.347129  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.347159  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.347273  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:28.347483  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.347634  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.347789  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:28.347940  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:28.348189  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:28.348203  510297 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0812 13:04:28.452675  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 13:04:28.452700  510297 main.go:141] libmachine: Detecting the provisioner...
	I0812 13:04:28.452711  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.455733  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.456181  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.456210  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.456399  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:28.456602  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.456759  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.456943  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:28.457138  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:28.457349  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:28.457363  510297 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0812 13:04:28.562500  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0812 13:04:28.562645  510297 main.go:141] libmachine: found compatible host: buildroot
	I0812 13:04:28.562661  510297 main.go:141] libmachine: Provisioning with buildroot...
	I0812 13:04:28.562673  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetMachineName
	I0812 13:04:28.562979  510297 buildroot.go:166] provisioning hostname "kubernetes-upgrade-399526"
	I0812 13:04:28.563004  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetMachineName
	I0812 13:04:28.563238  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.566370  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.566750  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.566790  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.566967  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:28.567194  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.567373  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.567521  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:28.567691  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:28.567904  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:28.567918  510297 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-399526 && echo "kubernetes-upgrade-399526" | sudo tee /etc/hostname
	I0812 13:04:28.688933  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-399526
	
	I0812 13:04:28.688962  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.691800  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.692294  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.692329  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.692506  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:28.692792  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.692941  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.693101  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:28.693279  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:28.693514  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:28.693533  510297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-399526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-399526/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-399526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0812 13:04:28.807058  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0812 13:04:28.807101  510297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19411-463103/.minikube CaCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19411-463103/.minikube}
	I0812 13:04:28.807163  510297 buildroot.go:174] setting up certificates
	I0812 13:04:28.807184  510297 provision.go:84] configureAuth start
	I0812 13:04:28.807203  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetMachineName
	I0812 13:04:28.807567  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetIP
	I0812 13:04:28.810674  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.810987  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.811015  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.811204  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.813860  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.814209  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.814238  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.814489  510297 provision.go:143] copyHostCerts
	I0812 13:04:28.814597  510297 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem, removing ...
	I0812 13:04:28.814610  510297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem
	I0812 13:04:28.814667  510297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/ca.pem (1078 bytes)
	I0812 13:04:28.814806  510297 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem, removing ...
	I0812 13:04:28.814818  510297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem
	I0812 13:04:28.814840  510297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/cert.pem (1123 bytes)
	I0812 13:04:28.814907  510297 exec_runner.go:144] found /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem, removing ...
	I0812 13:04:28.814916  510297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem
	I0812 13:04:28.814934  510297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19411-463103/.minikube/key.pem (1679 bytes)
	I0812 13:04:28.814994  510297 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-399526 san=[127.0.0.1 192.168.50.194 kubernetes-upgrade-399526 localhost minikube]
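The provisioner then generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube. A minimal sketch of issuing such a certificate with Go's crypto/x509; for brevity this version is self-signed, whereas the real code signs with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway RSA key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-399526"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list matching the san=[...] entries in the log line above.
		DNSNames:    []string{"kubernetes-upgrade-399526", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.194")},
	}
	// Self-signed for this sketch: template doubles as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}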
	I0812 13:04:28.898316  510297 provision.go:177] copyRemoteCerts
	I0812 13:04:28.898378  510297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 13:04:28.898407  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:28.901485  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.901953  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:28.901983  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:28.902269  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:28.902510  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:28.902691  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:28.902837  510297 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:04:28.984242  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0812 13:04:29.011109  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0812 13:04:29.039377  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0812 13:04:29.064190  510297 provision.go:87] duration metric: took 256.985057ms to configureAuth
	I0812 13:04:29.064228  510297 buildroot.go:189] setting minikube options for container-runtime
	I0812 13:04:29.064477  510297 config.go:182] Loaded profile config "kubernetes-upgrade-399526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0812 13:04:29.064592  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:29.067346  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.067715  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.067748  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.067906  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:29.068116  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.068297  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.068450  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:29.068628  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:29.068816  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:29.068832  510297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 13:04:29.351490  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 13:04:29.351523  510297 main.go:141] libmachine: Checking connection to Docker...
	I0812 13:04:29.351535  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetURL
	I0812 13:04:29.352807  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Using libvirt version 6000000
	I0812 13:04:29.355385  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.355773  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.355806  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.356044  510297 main.go:141] libmachine: Docker is up and running!
	I0812 13:04:29.356058  510297 main.go:141] libmachine: Reticulating splines...
	I0812 13:04:29.356065  510297 client.go:171] duration metric: took 26.227533928s to LocalClient.Create
	I0812 13:04:29.356088  510297 start.go:167] duration metric: took 26.227606088s to libmachine.API.Create "kubernetes-upgrade-399526"
	I0812 13:04:29.356097  510297 start.go:293] postStartSetup for "kubernetes-upgrade-399526" (driver="kvm2")
	I0812 13:04:29.356109  510297 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 13:04:29.356126  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:29.356440  510297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 13:04:29.356468  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:29.359239  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.359667  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.359722  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.359886  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:29.360117  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.360300  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:29.360439  510297 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:04:29.444239  510297 ssh_runner.go:195] Run: cat /etc/os-release
	I0812 13:04:29.449030  510297 info.go:137] Remote host: Buildroot 2023.02.9
	I0812 13:04:29.449064  510297 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/addons for local assets ...
	I0812 13:04:29.449150  510297 filesync.go:126] Scanning /home/jenkins/minikube-integration/19411-463103/.minikube/files for local assets ...
	I0812 13:04:29.449227  510297 filesync.go:149] local asset: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem -> 4703752.pem in /etc/ssl/certs
	I0812 13:04:29.449326  510297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0812 13:04:29.458993  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 13:04:29.485389  510297 start.go:296] duration metric: took 129.272705ms for postStartSetup
	I0812 13:04:29.485456  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetConfigRaw
	I0812 13:04:29.486198  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetIP
	I0812 13:04:29.488908  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.489238  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.489286  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.489502  510297 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/config.json ...
	I0812 13:04:29.489765  510297 start.go:128] duration metric: took 26.383173456s to createHost
	I0812 13:04:29.489792  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:29.492058  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.492437  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.492487  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.492573  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:29.492820  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.493026  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.493183  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:29.493370  510297 main.go:141] libmachine: Using SSH client type: native
	I0812 13:04:29.493554  510297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.194 22 <nil> <nil>}
	I0812 13:04:29.493565  510297 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0812 13:04:29.598829  510297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723467869.562535819
	
	I0812 13:04:29.598857  510297 fix.go:216] guest clock: 1723467869.562535819
	I0812 13:04:29.598865  510297 fix.go:229] Guest: 2024-08-12 13:04:29.562535819 +0000 UTC Remote: 2024-08-12 13:04:29.489777448 +0000 UTC m=+52.731574148 (delta=72.758371ms)
	I0812 13:04:29.598886  510297 fix.go:200] guest clock delta is within tolerance: 72.758371ms
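The guest clock check above parses the output of `date +%s.%N` on the VM and compares it against the host-side timestamp, accepting small skew. A small sketch of that comparison, reusing the values from the log (the 2-second tolerance is an assumption for illustration; the log does not state the actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N`
// (e.g. "1723467869.562535819") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723467869.562535819")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the "Remote:" value in the log line above.
	remote := time.Date(2024, 8, 12, 13, 4, 29, 489777448, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}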
	I0812 13:04:29.598891  510297 start.go:83] releasing machines lock for "kubernetes-upgrade-399526", held for 26.492509196s
	I0812 13:04:29.598917  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:29.599230  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetIP
	I0812 13:04:29.602439  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.602868  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.602902  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.603078  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:29.603711  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:29.603939  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:04:29.604072  510297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0812 13:04:29.604135  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:29.604410  510297 ssh_runner.go:195] Run: cat /version.json
	I0812 13:04:29.604437  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:04:29.607567  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.607761  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.607972  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.608006  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.608114  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:29.608145  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:29.608170  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:29.608340  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:04:29.608454  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.608526  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:04:29.608624  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:29.608826  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:04:29.608826  510297 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:04:29.608983  510297 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:04:29.722880  510297 ssh_runner.go:195] Run: systemctl --version
	I0812 13:04:29.729296  510297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0812 13:04:29.889394  510297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0812 13:04:29.897077  510297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0812 13:04:29.897202  510297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0812 13:04:29.914605  510297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0812 13:04:29.914629  510297 start.go:495] detecting cgroup driver to use...
	I0812 13:04:29.914704  510297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0812 13:04:29.936145  510297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0812 13:04:29.953662  510297 docker.go:217] disabling cri-docker service (if available) ...
	I0812 13:04:29.953734  510297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0812 13:04:29.968153  510297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0812 13:04:29.983329  510297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0812 13:04:30.116681  510297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0812 13:04:30.293804  510297 docker.go:233] disabling docker service ...
	I0812 13:04:30.293884  510297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0812 13:04:30.310002  510297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0812 13:04:30.324428  510297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0812 13:04:30.455295  510297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0812 13:04:30.567303  510297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0812 13:04:30.582030  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 13:04:30.603697  510297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0812 13:04:30.603795  510297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:04:30.615882  510297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0812 13:04:30.615989  510297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:04:30.627659  510297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0812 13:04:30.639367  510297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
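The sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and reset conmon_cgroup in the CRI-O drop-in. A partial Go sketch of the first two edits, applied to a local copy of 02-crio.conf instead of over SSH (configureCrio is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the CRI-O drop-in the same way the sed commands in
// the log do: pin the pause image and force the chosen cgroup manager.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, fmt.Sprintf("pause_image = %q", pauseImage))
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(path, []byte(s), 0644)
}

func main() {
	err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}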
	I0812 13:04:30.651442  510297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0812 13:04:30.663251  510297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 13:04:30.674146  510297 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 13:04:30.674232  510297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0812 13:04:30.689217  510297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 13:04:30.700443  510297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:04:30.829283  510297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0812 13:04:30.987397  510297 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 13:04:30.987486  510297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0812 13:04:30.992642  510297 start.go:563] Will wait 60s for crictl version
	I0812 13:04:30.992716  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:30.996837  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0812 13:04:31.044093  510297 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0812 13:04:31.044187  510297 ssh_runner.go:195] Run: crio --version
	I0812 13:04:31.084512  510297 ssh_runner.go:195] Run: crio --version
	I0812 13:04:31.126920  510297 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0812 13:04:31.128240  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetIP
	I0812 13:04:31.131499  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:31.131911  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:04:18 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:04:31.131945  510297 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:04:31.132219  510297 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0812 13:04:31.136915  510297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 13:04:31.151664  510297 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-399526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-399526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0812 13:04:31.151790  510297 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 13:04:31.151831  510297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 13:04:31.184982  510297 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 13:04:31.185067  510297 ssh_runner.go:195] Run: which lz4
	I0812 13:04:31.189105  510297 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0812 13:04:31.193337  510297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0812 13:04:31.193385  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0812 13:04:32.940868  510297 crio.go:462] duration metric: took 1.751819802s to copy over tarball
	I0812 13:04:32.940972  510297 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0812 13:04:35.771562  510297 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.830541779s)
	I0812 13:04:35.771605  510297 crio.go:469] duration metric: took 2.830693185s to extract the tarball
	I0812 13:04:35.771616  510297 ssh_runner.go:146] rm: /preloaded.tar.lz4
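Because no preloaded images were found in the runtime, the preload tarball is copied to the VM and unpacked into /var with lz4 decompression, preserving xattrs. A sketch of that extraction step as a plain exec call (the real code runs it remotely through ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the tarball step in the log: unpack the preloaded
// image tarball into /var using lz4 decompression, keeping security xattrs.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}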
	I0812 13:04:35.821903  510297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0812 13:04:35.879740  510297 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0812 13:04:35.879775  510297 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0812 13:04:35.879840  510297 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:35.879862  510297 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:04:35.879912  510297 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:35.879959  510297 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:35.880121  510297 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:35.880145  510297 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0812 13:04:35.880357  510297 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0812 13:04:35.880362  510297 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:35.881724  510297 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:35.881947  510297 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0812 13:04:35.882874  510297 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0812 13:04:35.882936  510297 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:35.882969  510297 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:35.882961  510297 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:35.882873  510297 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:04:35.883498  510297 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.082478  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0812 13:04:36.141105  510297 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0812 13:04:36.141159  510297 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0812 13:04:36.141216  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.143246  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.146023  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 13:04:36.198273  510297 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0812 13:04:36.198329  510297 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.198387  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.213752  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.213869  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 13:04:36.237899  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:36.238823  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:36.240750  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:36.256861  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0812 13:04:36.257125  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:36.315051  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.315258  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0812 13:04:36.431620  510297 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0812 13:04:36.431663  510297 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:36.431706  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.438413  510297 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0812 13:04:36.438476  510297 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:36.438430  510297 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0812 13:04:36.438532  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.438560  510297 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:36.438610  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.470987  510297 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0812 13:04:36.471023  510297 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0812 13:04:36.471046  510297 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0812 13:04:36.471062  510297 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:36.471100  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.471108  510297 ssh_runner.go:195] Run: which crictl
	I0812 13:04:36.475298  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0812 13:04:36.483428  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0812 13:04:36.483472  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:36.483538  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:36.483553  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:36.490704  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 13:04:36.490735  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:36.575855  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0812 13:04:36.625397  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:36.625450  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:36.625534  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:36.625582  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 13:04:36.625604  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:36.722082  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0812 13:04:36.745298  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0812 13:04:36.745383  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0812 13:04:36.745420  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0812 13:04:36.745436  510297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0812 13:04:36.785292  510297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:04:36.790734  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0812 13:04:36.845467  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0812 13:04:36.865361  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0812 13:04:36.865392  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0812 13:04:36.869566  510297 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0812 13:04:36.989583  510297 cache_images.go:92] duration metric: took 1.10978872s to LoadCachedImages
	W0812 13:04:36.989698  510297 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19411-463103/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0812 13:04:36.989716  510297 kubeadm.go:934] updating node { 192.168.50.194 8443 v1.20.0 crio true true} ...
	I0812 13:04:36.989868  510297 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-399526 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-399526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0812 13:04:36.989976  510297 ssh_runner.go:195] Run: crio config
	I0812 13:04:37.052866  510297 cni.go:84] Creating CNI manager for ""
	I0812 13:04:37.052897  510297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:04:37.052908  510297 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0812 13:04:37.052935  510297 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.194 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-399526 NodeName:kubernetes-upgrade-399526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0812 13:04:37.053153  510297 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-399526"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 13:04:37.053236  510297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0812 13:04:37.064521  510297 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 13:04:37.064611  510297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 13:04:37.076616  510297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0812 13:04:37.097512  510297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 13:04:37.123218  510297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
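The kubeadm configuration shown earlier is rendered from a Go template and staged as /var/tmp/minikube/kubeadm.yaml.new before being copied into place. A trimmed-down sketch of that templating step; the template text and params struct below are simplified stand-ins, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the template behind the kubeadm config above;
// only a few fields are included for illustration.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	p := params{
		NodeIP:        "192.168.50.194",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "kubernetes-upgrade-399526",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}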
	I0812 13:04:37.145745  510297 ssh_runner.go:195] Run: grep 192.168.50.194	control-plane.minikube.internal$ /etc/hosts
	I0812 13:04:37.150235  510297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 13:04:37.164586  510297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:04:37.288468  510297 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 13:04:37.307802  510297 certs.go:68] Setting up /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526 for IP: 192.168.50.194
	I0812 13:04:37.307828  510297 certs.go:194] generating shared ca certs ...
	I0812 13:04:37.307844  510297 certs.go:226] acquiring lock for ca certs: {Name:mk6de8304278a3baa72e9224be69e469723cb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.308038  510297 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key
	I0812 13:04:37.308088  510297 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key
	I0812 13:04:37.308102  510297 certs.go:256] generating profile certs ...
	I0812 13:04:37.308167  510297 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.key
	I0812 13:04:37.308203  510297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.crt with IP's: []
	I0812 13:04:37.356999  510297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.crt ...
	I0812 13:04:37.357032  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.crt: {Name:mk69a7b38e14e45086ad685f31d2e830607ace97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.386050  510297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.key ...
	I0812 13:04:37.386101  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.key: {Name:mk89da5ddef2d1da8e8580a8cc3fb2585d003123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.386251  510297 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key.5c07e52b
	I0812 13:04:37.386272  510297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt.5c07e52b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.194]
	I0812 13:04:37.558186  510297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt.5c07e52b ...
	I0812 13:04:37.558226  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt.5c07e52b: {Name:mk9e06daaffd77d71e3cd6aad8d23316b82c00fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.558412  510297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key.5c07e52b ...
	I0812 13:04:37.558429  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key.5c07e52b: {Name:mk5a62e951ce0c5282c4a49f6eb3e2fb71c653d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.558541  510297 certs.go:381] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt.5c07e52b -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt
	I0812 13:04:37.558640  510297 certs.go:385] copying /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key.5c07e52b -> /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key
	I0812 13:04:37.558720  510297 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.key
	I0812 13:04:37.558742  510297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.crt with IP's: []
	I0812 13:04:37.640215  510297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.crt ...
	I0812 13:04:37.640264  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.crt: {Name:mkd3f41d1015ce9c288fe45ba4d2961a56edffa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.640474  510297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.key ...
	I0812 13:04:37.640496  510297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.key: {Name:mk7e128165baf788d136d674f64ae535144def0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:04:37.640809  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem (1338 bytes)
	W0812 13:04:37.640875  510297 certs.go:480] ignoring /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375_empty.pem, impossibly tiny 0 bytes
	I0812 13:04:37.640892  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 13:04:37.640931  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/ca.pem (1078 bytes)
	I0812 13:04:37.640969  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/cert.pem (1123 bytes)
	I0812 13:04:37.641016  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/certs/key.pem (1679 bytes)
	I0812 13:04:37.641113  510297 certs.go:484] found cert: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem (1708 bytes)
	I0812 13:04:37.641802  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 13:04:37.675533  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 13:04:37.701739  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 13:04:37.728582  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0812 13:04:37.752872  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0812 13:04:37.804235  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0812 13:04:37.832222  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 13:04:37.860783  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 13:04:37.889250  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/ssl/certs/4703752.pem --> /usr/share/ca-certificates/4703752.pem (1708 bytes)
	I0812 13:04:37.925696  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 13:04:37.960384  510297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19411-463103/.minikube/certs/470375.pem --> /usr/share/ca-certificates/470375.pem (1338 bytes)
	I0812 13:04:37.991123  510297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 13:04:38.012814  510297 ssh_runner.go:195] Run: openssl version
	I0812 13:04:38.020980  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4703752.pem && ln -fs /usr/share/ca-certificates/4703752.pem /etc/ssl/certs/4703752.pem"
	I0812 13:04:38.036275  510297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4703752.pem
	I0812 13:04:38.042479  510297 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 12 12:07 /usr/share/ca-certificates/4703752.pem
	I0812 13:04:38.042548  510297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4703752.pem
	I0812 13:04:38.050258  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4703752.pem /etc/ssl/certs/3ec20f2e.0"
	I0812 13:04:38.062821  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 13:04:38.075161  510297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:04:38.080078  510297 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 12 11:27 /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:04:38.080152  510297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 13:04:38.086896  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 13:04:38.098141  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/470375.pem && ln -fs /usr/share/ca-certificates/470375.pem /etc/ssl/certs/470375.pem"
	I0812 13:04:38.110061  510297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/470375.pem
	I0812 13:04:38.114852  510297 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 12 12:07 /usr/share/ca-certificates/470375.pem
	I0812 13:04:38.114931  510297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/470375.pem
	I0812 13:04:38.120834  510297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/470375.pem /etc/ssl/certs/51391683.0"
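Each CA certificate copied into /usr/share/ca-certificates is then exposed to the system trust store by symlinking /etc/ssl/certs/<subject-hash>.0 to it, with the hash taken from `openssl x509 -hash -noout`. A sketch of that step, shelling out to openssl locally rather than over SSH (linkCertByHash is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash and
// symlinks /etc/ssl/certs/<hash>.0 to the PEM file, as in the log above.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already exists
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}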
	I0812 13:04:38.136155  510297 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0812 13:04:38.141809  510297 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0812 13:04:38.141893  510297 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-399526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-399526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:04:38.141993  510297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 13:04:38.142085  510297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 13:04:38.188671  510297 cri.go:89] found id: ""
	I0812 13:04:38.188773  510297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 13:04:38.202395  510297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 13:04:38.212785  510297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 13:04:38.222858  510297 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 13:04:38.222885  510297 kubeadm.go:157] found existing configuration files:
	
	I0812 13:04:38.222940  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 13:04:38.242546  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 13:04:38.242628  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 13:04:38.257956  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 13:04:38.276024  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 13:04:38.276100  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 13:04:38.294167  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 13:04:38.309438  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 13:04:38.309520  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 13:04:38.330241  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 13:04:38.340238  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 13:04:38.340313  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 13:04:38.350693  510297 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 13:04:38.489117  510297 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 13:04:38.489243  510297 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 13:04:38.638691  510297 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 13:04:38.638840  510297 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 13:04:38.638951  510297 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 13:04:38.837945  510297 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 13:04:39.030473  510297 out.go:204]   - Generating certificates and keys ...
	I0812 13:04:39.030616  510297 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 13:04:39.030703  510297 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 13:04:39.030817  510297 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0812 13:04:39.093988  510297 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0812 13:04:39.266544  510297 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0812 13:04:39.390716  510297 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0812 13:04:39.459525  510297 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0812 13:04:39.459746  510297 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	I0812 13:04:39.866889  510297 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0812 13:04:39.867257  510297 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	I0812 13:04:39.916438  510297 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0812 13:04:40.216844  510297 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0812 13:04:40.441077  510297 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0812 13:04:40.441196  510297 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 13:04:40.730983  510297 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 13:04:40.963741  510297 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 13:04:41.087001  510297 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 13:04:41.212471  510297 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 13:04:41.240979  510297 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 13:04:41.242126  510297 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 13:04:41.242207  510297 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 13:04:41.396943  510297 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 13:04:41.398693  510297 out.go:204]   - Booting up control plane ...
	I0812 13:04:41.398871  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 13:04:41.410479  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 13:04:41.412099  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 13:04:41.413435  510297 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 13:04:41.419062  510297 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 13:05:21.405578  510297 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 13:05:21.405787  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:05:21.406066  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:05:26.406601  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:05:26.406836  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:05:36.406265  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:05:36.406557  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:05:56.406198  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:05:56.406458  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:06:36.406953  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:06:36.407267  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:06:36.407280  510297 kubeadm.go:310] 
	I0812 13:06:36.407331  510297 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 13:06:36.407388  510297 kubeadm.go:310] 		timed out waiting for the condition
	I0812 13:06:36.407398  510297 kubeadm.go:310] 
	I0812 13:06:36.407447  510297 kubeadm.go:310] 	This error is likely caused by:
	I0812 13:06:36.407500  510297 kubeadm.go:310] 		- The kubelet is not running
	I0812 13:06:36.407621  510297 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 13:06:36.407631  510297 kubeadm.go:310] 
	I0812 13:06:36.407752  510297 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 13:06:36.407805  510297 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 13:06:36.407884  510297 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 13:06:36.407917  510297 kubeadm.go:310] 
	I0812 13:06:36.408077  510297 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 13:06:36.408189  510297 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 13:06:36.408204  510297 kubeadm.go:310] 
	I0812 13:06:36.408338  510297 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 13:06:36.408450  510297 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 13:06:36.408560  510297 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 13:06:36.408669  510297 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 13:06:36.408680  510297 kubeadm.go:310] 
	I0812 13:06:36.409446  510297 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 13:06:36.409600  510297 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 13:06:36.409706  510297 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0812 13:06:36.409924  510297 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-399526 localhost] and IPs [192.168.50.194 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0812 13:06:36.409994  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0812 13:06:38.400226  510297 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.990193906s)
	I0812 13:06:38.400339  510297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 13:06:38.416064  510297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 13:06:38.427233  510297 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 13:06:38.427257  510297 kubeadm.go:157] found existing configuration files:
	
	I0812 13:06:38.427321  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0812 13:06:38.442727  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0812 13:06:38.442819  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0812 13:06:38.455031  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0812 13:06:38.464651  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0812 13:06:38.464729  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0812 13:06:38.479252  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0812 13:06:38.491898  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0812 13:06:38.491964  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0812 13:06:38.502100  510297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0812 13:06:38.512150  510297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0812 13:06:38.512232  510297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0812 13:06:38.522888  510297 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0812 13:06:38.601401  510297 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0812 13:06:38.601534  510297 kubeadm.go:310] [preflight] Running pre-flight checks
	I0812 13:06:38.762221  510297 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0812 13:06:38.762404  510297 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0812 13:06:38.762550  510297 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0812 13:06:38.962164  510297 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0812 13:06:38.964281  510297 out.go:204]   - Generating certificates and keys ...
	I0812 13:06:38.964394  510297 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0812 13:06:38.964458  510297 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0812 13:06:38.964597  510297 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0812 13:06:38.964711  510297 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0812 13:06:38.964814  510297 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0812 13:06:38.964889  510297 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0812 13:06:38.965008  510297 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0812 13:06:38.965474  510297 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0812 13:06:38.966072  510297 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0812 13:06:38.966448  510297 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0812 13:06:38.966547  510297 kubeadm.go:310] [certs] Using the existing "sa" key
	I0812 13:06:38.966633  510297 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0812 13:06:39.118317  510297 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0812 13:06:39.243448  510297 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0812 13:06:39.358611  510297 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0812 13:06:39.436474  510297 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0812 13:06:39.454560  510297 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0812 13:06:39.459080  510297 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0812 13:06:39.459165  510297 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0812 13:06:39.597396  510297 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0812 13:06:39.599089  510297 out.go:204]   - Booting up control plane ...
	I0812 13:06:39.599235  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0812 13:06:39.604641  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0812 13:06:39.605732  510297 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0812 13:06:39.606541  510297 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0812 13:06:39.608644  510297 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0812 13:07:19.607376  510297 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0812 13:07:19.607542  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:07:19.607760  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:07:24.608271  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:07:24.608535  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:07:34.608744  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:07:34.609039  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:07:54.609109  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:07:54.609386  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:08:34.611347  510297 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0812 13:08:34.611869  510297 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0812 13:08:34.611921  510297 kubeadm.go:310] 
	I0812 13:08:34.612031  510297 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0812 13:08:34.612115  510297 kubeadm.go:310] 		timed out waiting for the condition
	I0812 13:08:34.612133  510297 kubeadm.go:310] 
	I0812 13:08:34.612234  510297 kubeadm.go:310] 	This error is likely caused by:
	I0812 13:08:34.612279  510297 kubeadm.go:310] 		- The kubelet is not running
	I0812 13:08:34.612413  510297 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0812 13:08:34.612423  510297 kubeadm.go:310] 
	I0812 13:08:34.612581  510297 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0812 13:08:34.612640  510297 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0812 13:08:34.612669  510297 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0812 13:08:34.612679  510297 kubeadm.go:310] 
	I0812 13:08:34.612824  510297 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0812 13:08:34.612923  510297 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0812 13:08:34.612932  510297 kubeadm.go:310] 
	I0812 13:08:34.613064  510297 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0812 13:08:34.613214  510297 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0812 13:08:34.613314  510297 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0812 13:08:34.613407  510297 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0812 13:08:34.613438  510297 kubeadm.go:310] 
	I0812 13:08:34.613580  510297 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0812 13:08:34.613680  510297 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0812 13:08:34.613852  510297 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0812 13:08:34.613855  510297 kubeadm.go:394] duration metric: took 3m56.471969895s to StartCluster
	I0812 13:08:34.613966  510297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0812 13:08:34.614052  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0812 13:08:34.664914  510297 cri.go:89] found id: ""
	I0812 13:08:34.664951  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.664964  510297 logs.go:278] No container was found matching "kube-apiserver"
	I0812 13:08:34.664971  510297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0812 13:08:34.665037  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0812 13:08:34.703255  510297 cri.go:89] found id: ""
	I0812 13:08:34.703290  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.703303  510297 logs.go:278] No container was found matching "etcd"
	I0812 13:08:34.703327  510297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0812 13:08:34.703383  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0812 13:08:34.739598  510297 cri.go:89] found id: ""
	I0812 13:08:34.739631  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.739643  510297 logs.go:278] No container was found matching "coredns"
	I0812 13:08:34.739651  510297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0812 13:08:34.739731  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0812 13:08:34.780801  510297 cri.go:89] found id: ""
	I0812 13:08:34.780836  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.780847  510297 logs.go:278] No container was found matching "kube-scheduler"
	I0812 13:08:34.780855  510297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0812 13:08:34.780931  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0812 13:08:34.816355  510297 cri.go:89] found id: ""
	I0812 13:08:34.816392  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.816406  510297 logs.go:278] No container was found matching "kube-proxy"
	I0812 13:08:34.816415  510297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0812 13:08:34.816478  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0812 13:08:34.862080  510297 cri.go:89] found id: ""
	I0812 13:08:34.862113  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.862124  510297 logs.go:278] No container was found matching "kube-controller-manager"
	I0812 13:08:34.862131  510297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0812 13:08:34.862204  510297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0812 13:08:34.902072  510297 cri.go:89] found id: ""
	I0812 13:08:34.902103  510297 logs.go:276] 0 containers: []
	W0812 13:08:34.902118  510297 logs.go:278] No container was found matching "kindnet"
	I0812 13:08:34.902143  510297 logs.go:123] Gathering logs for kubelet ...
	I0812 13:08:34.902164  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0812 13:08:34.957776  510297 logs.go:123] Gathering logs for dmesg ...
	I0812 13:08:34.957815  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0812 13:08:34.971671  510297 logs.go:123] Gathering logs for describe nodes ...
	I0812 13:08:34.971696  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0812 13:08:35.099332  510297 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0812 13:08:35.099361  510297 logs.go:123] Gathering logs for CRI-O ...
	I0812 13:08:35.099380  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0812 13:08:35.203347  510297 logs.go:123] Gathering logs for container status ...
	I0812 13:08:35.203400  510297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0812 13:08:35.245382  510297 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0812 13:08:35.245426  510297 out.go:239] * 
	* 
	W0812 13:08:35.245484  510297 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 13:08:35.245507  510297 out.go:239] * 
	* 
	W0812 13:08:35.246840  510297 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0812 13:08:35.250437  510297 out.go:177] 
	W0812 13:08:35.251596  510297 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0812 13:08:35.251643  510297 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0812 13:08:35.251664  510297 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0812 13:08:35.252973  510297 out.go:177] 

                                                
                                                
** /stderr **
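The kubeadm and minikube output above repeatedly points to the same manual triage steps; a minimal sketch of that sequence against the failed profile, assuming the kubernetes-upgrade-399526 VM is still reachable (CONTAINERID is a placeholder to be taken from the crictl listing):

	# Open a shell inside the minikube VM for the failed profile
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-399526

	# Inside the VM: check whether the kubelet is running, and why it exited if not
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List all Kubernetes containers known to CRI-O, including exited ones
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Inspect the logs of a failing container (replace CONTAINERID with an ID from the listing above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# minikube's own suggestion above: retry the start with the kubelet pinned to the systemd cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd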
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-399526
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-399526: (1.471557648s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-399526 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-399526 status --format={{.Host}}: exit status 7 (69.242741ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.562434216s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-399526 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.002115ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-399526] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-399526
	    minikube start -p kubernetes-upgrade-399526 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3995262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-399526 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
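The K8S_DOWNGRADE_UNSUPPORTED exit above is the expected outcome at this step: version_upgrade_test.go:267 deliberately attempts the downgrade and only checks that it is refused. If the suggested recovery path 1 were actually needed, it would be run with the same driver, runtime, and memory flags this profile uses elsewhere in the log (a sketch, flags copied from the start invocations above):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-399526
	out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio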
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-399526 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.249426546s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-12 13:10:23.816071105 +0000 UTC m=+6334.757443057
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-399526 -n kubernetes-upgrade-399526
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-399526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-399526 logs -n 25: (1.802640459s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-395896 sudo           | NoKubernetes-395896       | jenkins | v1.33.1 | 12 Aug 24 13:06 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-395896                | NoKubernetes-395896       | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:07 UTC |
	| start   | -p NoKubernetes-395896                | NoKubernetes-395896       | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:07 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-563509             | running-upgrade-563509    | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:07 UTC |
	| start   | -p force-systemd-flag-914561          | force-systemd-flag-914561 | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:08 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-395896 sudo           | NoKubernetes-395896       | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-395896                | NoKubernetes-395896       | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:07 UTC |
	| start   | -p cert-options-977658                | cert-options-977658       | jenkins | v1.33.1 | 12 Aug 24 13:07 UTC | 12 Aug 24 13:08 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-914561 ssh cat     | force-systemd-flag-914561 | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-914561          | force-systemd-flag-914561 | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	| start   | -p stopped-upgrade-421827             | minikube                  | jenkins | v1.26.0 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:09 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-977658 ssh               | cert-options-977658       | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-977658 -- sudo        | cert-options-977658       | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-977658                | cert-options-977658       | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	| start   | -p pause-752920 --memory=2048         | pause-752920              | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:10 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-399526          | kubernetes-upgrade-399526 | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:08 UTC |
	| start   | -p kubernetes-upgrade-399526          | kubernetes-upgrade-399526 | jenkins | v1.33.1 | 12 Aug 24 13:08 UTC | 12 Aug 24 13:09 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-421827 stop           | minikube                  | jenkins | v1.26.0 | 12 Aug 24 13:09 UTC | 12 Aug 24 13:09 UTC |
	| start   | -p stopped-upgrade-421827             | stopped-upgrade-421827    | jenkins | v1.33.1 | 12 Aug 24 13:09 UTC | 12 Aug 24 13:10 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-399526          | kubernetes-upgrade-399526 | jenkins | v1.33.1 | 12 Aug 24 13:09 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-399526          | kubernetes-upgrade-399526 | jenkins | v1.33.1 | 12 Aug 24 13:09 UTC | 12 Aug 24 13:10 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-421827             | stopped-upgrade-421827    | jenkins | v1.33.1 | 12 Aug 24 13:10 UTC | 12 Aug 24 13:10 UTC |
	| start   | -p auto-620755 --memory=3072          | auto-620755               | jenkins | v1.33.1 | 12 Aug 24 13:10 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-993047             | cert-expiration-993047    | jenkins | v1.33.1 | 12 Aug 24 13:10 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-752920                       | pause-752920              | jenkins | v1.33.1 | 12 Aug 24 13:10 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 13:10:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
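Each captured line below carries the glog-style header described by the format string above; as a worked example, the first captured line "I0812 13:10:16.234381  518075 out.go:291] Setting OutFile to fd 1 ..." decodes as:

	I                    severity ([IWEF] = Info, Warning, Error, Fatal)
	0812                 mmdd (12 Aug)
	13:10:16.234381      hh:mm:ss.uuuuuu
	518075               threadid
	out.go:291           file:line
	Setting OutFile ...  msg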
	I0812 13:10:16.234381  518075 out.go:291] Setting OutFile to fd 1 ...
	I0812 13:10:16.234695  518075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:10:16.234707  518075 out.go:304] Setting ErrFile to fd 2...
	I0812 13:10:16.234713  518075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 13:10:16.234942  518075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 13:10:16.235529  518075 out.go:298] Setting JSON to false
	I0812 13:10:16.236579  518075 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17547,"bootTime":1723450669,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 13:10:16.236649  518075 start.go:139] virtualization: kvm guest
	I0812 13:10:16.238993  518075 out.go:177] * [pause-752920] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 13:10:16.240353  518075 notify.go:220] Checking for updates...
	I0812 13:10:16.240388  518075 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 13:10:16.242077  518075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 13:10:16.243435  518075 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:10:16.244697  518075 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 13:10:16.245916  518075 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 13:10:16.247175  518075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 13:10:16.248838  518075 config.go:182] Loaded profile config "pause-752920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 13:10:16.249352  518075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:16.249418  518075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:16.266687  518075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0812 13:10:16.267232  518075 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:16.267924  518075 main.go:141] libmachine: Using API Version  1
	I0812 13:10:16.267951  518075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:16.268448  518075 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:16.268685  518075 main.go:141] libmachine: (pause-752920) Calling .DriverName
	I0812 13:10:16.268994  518075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 13:10:16.269336  518075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:16.269378  518075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:16.284975  518075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39843
	I0812 13:10:16.285430  518075 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:16.285936  518075 main.go:141] libmachine: Using API Version  1
	I0812 13:10:16.285960  518075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:16.286374  518075 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:16.286618  518075 main.go:141] libmachine: (pause-752920) Calling .DriverName
	I0812 13:10:16.326298  518075 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 13:10:16.327542  518075 start.go:297] selected driver: kvm2
	I0812 13:10:16.327568  518075 start.go:901] validating driver "kvm2" against &{Name:pause-752920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-752920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:10:16.327839  518075 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 13:10:16.328324  518075 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:10:16.328428  518075 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 13:10:16.345786  518075 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 13:10:16.346877  518075 cni.go:84] Creating CNI manager for ""
	I0812 13:10:16.346902  518075 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:10:16.346991  518075 start.go:340] cluster config:
	{Name:pause-752920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-752920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.59 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 13:10:16.347187  518075 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 13:10:16.348910  518075 out.go:177] * Starting "pause-752920" primary control-plane node in "pause-752920" cluster
	I0812 13:10:12.777974  517985 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 13:10:12.778028  517985 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 13:10:12.778038  517985 cache.go:56] Caching tarball of preloaded images
	I0812 13:10:12.778142  517985 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 13:10:12.778151  517985 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 13:10:12.778285  517985 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/cert-expiration-993047/config.json ...
	I0812 13:10:12.778550  517985 start.go:360] acquireMachinesLock for cert-expiration-993047: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 13:10:16.070391  517549 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020179652s)
	I0812 13:10:16.070436  517549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:10:16.356934  517549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:10:16.433492  517549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:10:16.548226  517549 api_server.go:52] waiting for apiserver process to appear ...
	I0812 13:10:16.548333  517549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:10:17.048983  517549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:10:17.548808  517549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:10:17.566082  517549 api_server.go:72] duration metric: took 1.017855266s to wait for apiserver process to appear ...
	I0812 13:10:17.566120  517549 api_server.go:88] waiting for apiserver healthz status ...
	I0812 13:10:17.566165  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:20.373142  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 13:10:20.373182  517549 api_server.go:103] status: https://192.168.50.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 13:10:20.373202  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:20.472107  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0812 13:10:20.472150  517549 api_server.go:103] status: https://192.168.50.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0812 13:10:20.566294  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:16.820841  517900 main.go:141] libmachine: (auto-620755) DBG | domain auto-620755 has defined MAC address 52:54:00:87:ad:8f in network mk-auto-620755
	I0812 13:10:16.821389  517900 main.go:141] libmachine: (auto-620755) DBG | unable to find current IP address of domain auto-620755 in network mk-auto-620755
	I0812 13:10:16.821418  517900 main.go:141] libmachine: (auto-620755) DBG | I0812 13:10:16.821315  517924 retry.go:31] will retry after 1.032768752s: waiting for machine to come up
	I0812 13:10:17.856269  517900 main.go:141] libmachine: (auto-620755) DBG | domain auto-620755 has defined MAC address 52:54:00:87:ad:8f in network mk-auto-620755
	I0812 13:10:17.856792  517900 main.go:141] libmachine: (auto-620755) DBG | unable to find current IP address of domain auto-620755 in network mk-auto-620755
	I0812 13:10:17.856820  517900 main.go:141] libmachine: (auto-620755) DBG | I0812 13:10:17.856738  517924 retry.go:31] will retry after 1.652273038s: waiting for machine to come up
	I0812 13:10:19.511218  517900 main.go:141] libmachine: (auto-620755) DBG | domain auto-620755 has defined MAC address 52:54:00:87:ad:8f in network mk-auto-620755
	I0812 13:10:19.511756  517900 main.go:141] libmachine: (auto-620755) DBG | unable to find current IP address of domain auto-620755 in network mk-auto-620755
	I0812 13:10:19.511791  517900 main.go:141] libmachine: (auto-620755) DBG | I0812 13:10:19.511715  517924 retry.go:31] will retry after 1.881839705s: waiting for machine to come up
	I0812 13:10:16.350097  518075 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 13:10:16.350139  518075 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 13:10:16.350149  518075 cache.go:56] Caching tarball of preloaded images
	I0812 13:10:16.350237  518075 preload.go:172] Found /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 13:10:16.350251  518075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0812 13:10:16.350413  518075 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/pause-752920/config.json ...
	I0812 13:10:16.350649  518075 start.go:360] acquireMachinesLock for pause-752920: {Name:mkd847f02622328f4ac3a477e09ad4715e912385 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0812 13:10:20.614595  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:10:20.614652  517549 api_server.go:103] status: https://192.168.50.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:10:21.066453  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:21.073645  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:10:21.073705  517549 api_server.go:103] status: https://192.168.50.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:10:21.566263  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:21.572900  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0812 13:10:21.572933  517549 api_server.go:103] status: https://192.168.50.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0812 13:10:22.066430  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:22.072856  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 200:
	ok
	I0812 13:10:22.079748  517549 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 13:10:22.079786  517549 api_server.go:131] duration metric: took 4.513659081s to wait for apiserver health ...
	I0812 13:10:22.079799  517549 cni.go:84] Creating CNI manager for ""
	I0812 13:10:22.079808  517549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 13:10:22.081840  517549 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0812 13:10:22.083281  517549 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0812 13:10:22.095957  517549 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0812 13:10:22.120071  517549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 13:10:22.130419  517549 system_pods.go:59] 8 kube-system pods found
	I0812 13:10:22.130488  517549 system_pods.go:61] "coredns-6f6b679f8f-hhrt7" [0bca73f7-320f-48b7-b7a6-d90dca43cac8] Running
	I0812 13:10:22.130497  517549 system_pods.go:61] "coredns-6f6b679f8f-sprrc" [d16aa800-3a6a-4588-b708-ccfee84d3027] Running
	I0812 13:10:22.130507  517549 system_pods.go:61] "etcd-kubernetes-upgrade-399526" [6c371007-0778-40bc-b72c-728f9593650b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0812 13:10:22.130519  517549 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-399526" [d72178e3-c5ad-40ed-8792-f8b6592d6763] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 13:10:22.130532  517549 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-399526" [8a121d50-aa4e-494d-8d4f-e8269e5ae96d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 13:10:22.130539  517549 system_pods.go:61] "kube-proxy-j5f5s" [a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0] Running
	I0812 13:10:22.130552  517549 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-399526" [0bb20b1d-8466-4777-9299-ae0cb25cd8e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0812 13:10:22.130563  517549 system_pods.go:61] "storage-provisioner" [68ee19fe-7e09-431e-8b0c-e7cac69837ad] Running
	I0812 13:10:22.130572  517549 system_pods.go:74] duration metric: took 10.47366ms to wait for pod list to return data ...
	I0812 13:10:22.130583  517549 node_conditions.go:102] verifying NodePressure condition ...
	I0812 13:10:22.134943  517549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 13:10:22.134979  517549 node_conditions.go:123] node cpu capacity is 2
	I0812 13:10:22.134993  517549 node_conditions.go:105] duration metric: took 4.404984ms to run NodePressure ...
	I0812 13:10:22.135015  517549 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0812 13:10:22.482056  517549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 13:10:22.495779  517549 ops.go:34] apiserver oom_adj: -16
	I0812 13:10:22.495811  517549 kubeadm.go:597] duration metric: took 18.484589679s to restartPrimaryControlPlane
	I0812 13:10:22.495825  517549 kubeadm.go:394] duration metric: took 18.827099182s to StartCluster
	I0812 13:10:22.495850  517549 settings.go:142] acquiring lock: {Name:mke9ed38a916e17fe99baccde568c442d70df1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:10:22.495950  517549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 13:10:22.497363  517549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/kubeconfig: {Name:mk4f205db2bcce10f36c78768db1f6bbce48b12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 13:10:22.497635  517549 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.194 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0812 13:10:22.497767  517549 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0812 13:10:22.497838  517549 config.go:182] Loaded profile config "kubernetes-upgrade-399526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0812 13:10:22.497850  517549 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-399526"
	I0812 13:10:22.497881  517549 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-399526"
	W0812 13:10:22.497904  517549 addons.go:243] addon storage-provisioner should already be in state true
	I0812 13:10:22.497904  517549 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-399526"
	I0812 13:10:22.497928  517549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-399526"
	I0812 13:10:22.497933  517549 host.go:66] Checking if "kubernetes-upgrade-399526" exists ...
	I0812 13:10:22.498240  517549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:22.498282  517549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:22.498408  517549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:22.498452  517549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:22.499366  517549 out.go:177] * Verifying Kubernetes components...
	I0812 13:10:22.500737  517549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0812 13:10:22.519836  517549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0812 13:10:22.519849  517549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0812 13:10:22.520416  517549 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:22.520620  517549 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:22.520958  517549 main.go:141] libmachine: Using API Version  1
	I0812 13:10:22.520979  517549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:22.521145  517549 main.go:141] libmachine: Using API Version  1
	I0812 13:10:22.521161  517549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:22.521458  517549 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:22.521531  517549 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:22.522092  517549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:22.522143  517549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:22.522652  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetState
	I0812 13:10:22.526290  517549 kapi.go:59] client config for kubernetes-upgrade-399526: &rest.Config{Host:"https://192.168.50.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.crt", KeyFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kubernetes-upgrade-399526/client.key", CAFile:"/home/jenkins/minikube-integration/19411-463103/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0812 13:10:22.526719  517549 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-399526"
	W0812 13:10:22.526735  517549 addons.go:243] addon default-storageclass should already be in state true
	I0812 13:10:22.526767  517549 host.go:66] Checking if "kubernetes-upgrade-399526" exists ...
	I0812 13:10:22.527128  517549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:22.527164  517549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:22.543782  517549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I0812 13:10:22.544375  517549 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:22.545114  517549 main.go:141] libmachine: Using API Version  1
	I0812 13:10:22.545137  517549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:22.545596  517549 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:22.545851  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetState
	I0812 13:10:22.548333  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:10:22.550562  517549 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 13:10:22.552138  517549 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 13:10:22.552160  517549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 13:10:22.552186  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:10:22.555198  517549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0812 13:10:22.555712  517549 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:22.556137  517549 main.go:141] libmachine: Using API Version  1
	I0812 13:10:22.556150  517549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:22.556436  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:10:22.556583  517549 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:22.557145  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:10:22.557171  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:09:14 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:10:22.557194  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:10:22.557371  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:10:22.557573  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:10:22.557644  517549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 13:10:22.557689  517549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 13:10:22.557771  517549 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:10:22.579455  517549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0812 13:10:22.580090  517549 main.go:141] libmachine: () Calling .GetVersion
	I0812 13:10:22.580727  517549 main.go:141] libmachine: Using API Version  1
	I0812 13:10:22.580761  517549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 13:10:22.581185  517549 main.go:141] libmachine: () Calling .GetMachineName
	I0812 13:10:22.581396  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetState
	I0812 13:10:22.583503  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .DriverName
	I0812 13:10:22.583801  517549 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 13:10:22.583823  517549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 13:10:22.583849  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHHostname
	I0812 13:10:22.587753  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:10:22.588378  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:73:2d", ip: ""} in network mk-kubernetes-upgrade-399526: {Iface:virbr2 ExpiryTime:2024-08-12 14:09:14 +0000 UTC Type:0 Mac:52:54:00:d0:73:2d Iaid: IPaddr:192.168.50.194 Prefix:24 Hostname:kubernetes-upgrade-399526 Clientid:01:52:54:00:d0:73:2d}
	I0812 13:10:22.588412  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | domain kubernetes-upgrade-399526 has defined IP address 192.168.50.194 and MAC address 52:54:00:d0:73:2d in network mk-kubernetes-upgrade-399526
	I0812 13:10:22.588655  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHPort
	I0812 13:10:22.588851  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHKeyPath
	I0812 13:10:22.589012  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .GetSSHUsername
	I0812 13:10:22.589169  517549 sshutil.go:53] new ssh client: &{IP:192.168.50.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/kubernetes-upgrade-399526/id_rsa Username:docker}
	I0812 13:10:22.757234  517549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0812 13:10:22.779946  517549 api_server.go:52] waiting for apiserver process to appear ...
	I0812 13:10:22.780037  517549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 13:10:22.802049  517549 api_server.go:72] duration metric: took 304.373928ms to wait for apiserver process to appear ...
	I0812 13:10:22.802091  517549 api_server.go:88] waiting for apiserver healthz status ...
	I0812 13:10:22.802116  517549 api_server.go:253] Checking apiserver healthz at https://192.168.50.194:8443/healthz ...
	I0812 13:10:22.809131  517549 api_server.go:279] https://192.168.50.194:8443/healthz returned 200:
	ok
	I0812 13:10:22.810677  517549 api_server.go:141] control plane version: v1.31.0-rc.0
	I0812 13:10:22.810700  517549 api_server.go:131] duration metric: took 8.602171ms to wait for apiserver health ...
	I0812 13:10:22.810709  517549 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 13:10:22.817004  517549 system_pods.go:59] 8 kube-system pods found
	I0812 13:10:22.817039  517549 system_pods.go:61] "coredns-6f6b679f8f-hhrt7" [0bca73f7-320f-48b7-b7a6-d90dca43cac8] Running
	I0812 13:10:22.817046  517549 system_pods.go:61] "coredns-6f6b679f8f-sprrc" [d16aa800-3a6a-4588-b708-ccfee84d3027] Running
	I0812 13:10:22.817055  517549 system_pods.go:61] "etcd-kubernetes-upgrade-399526" [6c371007-0778-40bc-b72c-728f9593650b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0812 13:10:22.817065  517549 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-399526" [d72178e3-c5ad-40ed-8792-f8b6592d6763] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0812 13:10:22.817076  517549 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-399526" [8a121d50-aa4e-494d-8d4f-e8269e5ae96d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0812 13:10:22.817094  517549 system_pods.go:61] "kube-proxy-j5f5s" [a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0] Running
	I0812 13:10:22.817104  517549 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-399526" [0bb20b1d-8466-4777-9299-ae0cb25cd8e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0812 13:10:22.817110  517549 system_pods.go:61] "storage-provisioner" [68ee19fe-7e09-431e-8b0c-e7cac69837ad] Running
	I0812 13:10:22.817119  517549 system_pods.go:74] duration metric: took 6.402958ms to wait for pod list to return data ...
	I0812 13:10:22.817136  517549 kubeadm.go:582] duration metric: took 319.468024ms to wait for: map[apiserver:true system_pods:true]
	I0812 13:10:22.817153  517549 node_conditions.go:102] verifying NodePressure condition ...
	I0812 13:10:22.819859  517549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0812 13:10:22.819886  517549 node_conditions.go:123] node cpu capacity is 2
	I0812 13:10:22.819899  517549 node_conditions.go:105] duration metric: took 2.737317ms to run NodePressure ...
	I0812 13:10:22.819914  517549 start.go:241] waiting for startup goroutines ...
	I0812 13:10:22.926059  517549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 13:10:22.947478  517549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 13:10:23.705068  517549 main.go:141] libmachine: Making call to close driver server
	I0812 13:10:23.705123  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Close
	I0812 13:10:23.705168  517549 main.go:141] libmachine: Making call to close driver server
	I0812 13:10:23.705194  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Close
	I0812 13:10:23.705469  517549 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:10:23.705483  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Closing plugin on server side
	I0812 13:10:23.705489  517549 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:10:23.705499  517549 main.go:141] libmachine: Making call to close driver server
	I0812 13:10:23.705507  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Close
	I0812 13:10:23.705717  517549 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:10:23.705727  517549 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:10:23.705746  517549 main.go:141] libmachine: Making call to close driver server
	I0812 13:10:23.705755  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Close
	I0812 13:10:23.705763  517549 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:10:23.705774  517549 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:10:23.707481  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) DBG | Closing plugin on server side
	I0812 13:10:23.707585  517549 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:10:23.707633  517549 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:10:23.732455  517549 main.go:141] libmachine: Making call to close driver server
	I0812 13:10:23.732485  517549 main.go:141] libmachine: (kubernetes-upgrade-399526) Calling .Close
	I0812 13:10:23.732803  517549 main.go:141] libmachine: Successfully made call to close driver server
	I0812 13:10:23.732823  517549 main.go:141] libmachine: Making call to close connection to plugin binary
	I0812 13:10:23.734919  517549 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0812 13:10:23.736343  517549 addons.go:510] duration metric: took 1.238581924s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0812 13:10:23.736390  517549 start.go:246] waiting for cluster config update ...
	I0812 13:10:23.736418  517549 start.go:255] writing updated cluster config ...
	I0812 13:10:23.736732  517549 ssh_runner.go:195] Run: rm -f paused
	I0812 13:10:23.793213  517549 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0812 13:10:23.795207  517549 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-399526" cluster and "default" namespace by default
	
	
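	Note on the repeated 500 responses above: the per-check [+]/[-] listing is the kube-apiserver's verbose /healthz output, which minikube polls roughly every 500ms until the bootstrap poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok and the endpoint returns 200. The following is a minimal sketch of an equivalent poll, not minikube's actual code: it assumes direct network access to the apiserver at 192.168.50.194:8443, relies on /healthz being readable anonymously (the default system:public-info-viewer binding), and skips TLS verification for brevity, whereas the real check in api_server.go authenticates with the cluster's client certificate and CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: skip TLS verification; the real health check uses
		// the profile's client cert/key and CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			// ?verbose asks the apiserver to list each healthz check, as seen above.
			resp, err := client.Get("https://192.168.50.194:8443/healthz?verbose")
			if err != nil {
				fmt.Println("apiserver not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // all poststarthook checks report ok
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the polling in the log above
		}
	}
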
	==> CRI-O <==
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.661942346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a5d4cf6-54f7-420f-8768-2a605c57306b name=/runtime.v1.RuntimeService/Version
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.663778355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06244af9-5251-49a6-bc75-0a4f3ae51e01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.664265276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723468224664241477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06244af9-5251-49a6-bc75-0a4f3ae51e01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.664740519Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7b2593cc-0f43-485b-be88-5bd04ccb7a68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.664992080Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:55d905b3c26277edf55768695a93ec2ae6a191a873e5fcdec400868af6fcbaf0,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-sprrc,Uid:d16aa800-3a6a-4588-b708-ccfee84d3027,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468203021612930,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.586234739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5057c67d0265222a5437fc3ec0d8e09489bced11577f445f2eaf4e45c47015b8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-hhrt7,Uid:0bca73f7-320f-48b7-b7a6-d90dca43cac8,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202997878257,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.575511391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce3a3d223b1e3b0ace739318ebe202b9ea4cc23a6b4d5785ade77d5ad185092f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:68ee19fe-7e09-431e-8b0c-e7cac69837ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202642547217,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T13:09:42.924484372Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f6ebd03a8872517ca5c44b33e923d051da059701515041064b876a48c4f4f26,Metadata:&PodSandboxMetadata{Name:kube-proxy-j5f5s,Uid:a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202637904140,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.682180229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-399526,Uid:2112bd3608e81a6084ba485c6bbb5657,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202563844276,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 2112bd3608e81a6084ba485c6bbb5657,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2112bd3608e81a6084ba485c6bbb5657,kubernetes.io/config.seen: 2024-08-12T13:09:31.220805726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-399526,Uid:4a2fc405ff1e6397c1d090d967148c31,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202490109409,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.194:8443,kubernetes.io/config.hash: 4a2fc405ff1e6397c1d090d967148c31,kubernetes.io/config.seen: 2024-08-12T13:09
:31.220798712Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-399526,Uid:3fcb2eeb6a61b25dbcbf686a7202dea8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202474458578,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.194:2379,kubernetes.io/config.hash: 3fcb2eeb6a61b25dbcbf686a7202dea8,kubernetes.io/config.seen: 2024-08-12T13:09:31.268436853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-399526,Uid:0
a1934693738a8399dee5ff86e2d0365,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723468202469655838,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0a1934693738a8399dee5ff86e2d0365,kubernetes.io/config.seen: 2024-08-12T13:09:31.220807646Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6dafac038cd8f5d8690e4b611d2d95ebc19ee520219381612057c689985d846,Metadata:&PodSandboxMetadata{Name:kube-proxy-j5f5s,Uid:a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723468184014513626,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.682180229Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b13e05c61c050ee7c5d311eca6fb4ab22b23c5c087e35e2e21104c3f1ed3d91e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-sprrc,Uid:d16aa800-3a6a-4588-b708-ccfee84d3027,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723468183901861224,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.586234739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90b75dd1b44143030bc4ec5d928f39b8792400d2bdf905a34c5962fb6db8abe8,Metadata:&PodSandboxMetadat
a{Name:coredns-6f6b679f8f-hhrt7,Uid:0bca73f7-320f-48b7-b7a6-d90dca43cac8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723468183884697686,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-12T13:09:43.575511391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e42b6ebd6219f763644a7329091355536bd25fbec6d9363edc8507459623803,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:68ee19fe-7e09-431e-8b0c-e7cac69837ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723468183838446781,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-12T13:09:42.924484372Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7b2593cc-0f43-485b-be88-5bd04ccb7a68
name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.665968186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b0a3a3b-5b98-4e1b-8eed-e8ec08b7c165 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.666079872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b0a3a3b-5b98-4e1b-8eed-e8ec08b7c165 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.666433399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b80ce301aaae1a1fb809437df03fe9dc37db2f4de4945ad46851b54e0d325f6,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723468217019349403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1860c22afbe06c01fafdca14d92608bfc1678d0d38a5c7674b78dc94969e4e7c,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723468216994799566,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c24d872a4f460f0b234d86ebfcb2f99c1727cd06a352884f898c6e7363fe7fc,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723468217005890679,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6156a4bb87c49342cdc1717d828d2196f6b4ee81c48c95b7be2c3e7f4ce18aef,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723468216981504838,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4184067a019f48d6828ea7f3db544a2629806202932d7acfb6ceb24873b240e4,PodSandboxId:ce3a3d223b1e3b0ace739318ebe202b9ea4cc23a6b4d5785ade77d5ad185092f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723468204328435646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2989bbc7b93963e50c12a94841febbff67796c4731d58d733bf87e32e56c489,PodSandboxId:55d905b3c26277edf55768695a93ec2ae6a191a873e5fcdec400868af6fcbaf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204415764201,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76098050318bb0190ae6b5af48880ed9a06fcd95db6e9ca4b0f03e9f3add840a,PodSandboxId:5057c67d0265222a5437fc3ec0d8e09489bced11577f445f2eaf4e45c47015b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204272531223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48
b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460621584a49f59f25fd0edb0ed8d8c4340c48ad6b7a9b76719f44b1fdcf9fd0,PodSandboxId:1f6ebd03a8872517ca5c44b33e923d051da059701515041064b876a48c4f4f26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:17234682039983
30541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30f43cfbc512dac86c13c72dc9dd66a3a9fcad74f4de33105c329a363b061ef,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723468203017604838,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb218b4c543dcc4b683508fb430b917eb53a13d1023dc9353fca237565cd0256,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723468202943291575,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b93a6654ce34966875ca353fe539d961ef8f78e20be6a1cbcb573a5a0eda460,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723468202785742340,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3851052528b03c4b87bd3b0b4d97a004aa3d7e2d2f80c13720e5d91efb0dd1c,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723468202850249417,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bf8900d2935a862d13ee860924f962d5dc54db76088f268774554cac7b3,PodSandboxId:90b75dd1b44143030bc4ec5d928f39b8792400d2bdf905a34c5962fb6db8abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184528791250,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09506b6b244fed376e35dd3b9778fa5b0a1b95f87bb0ca7b6b2ee8b645108df,PodSandboxId:b13e05c61c050ee7c5d311eca6fb4ab22b23c5c087e35e2e21104c3f1ed3d91e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184560003895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f0b87e35afd881b9858bc8fc429db9f65295664923c32a7a9a7bdeefe714ce,PodSandboxId:c6dafac038cd8f5d8690e4b611d2d95ebc19ee520219381612057c689985d846,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723468184247722484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c245096f3979ea98931464843609390c41d2f3f5a6902cacf3d77a85d848e4,PodSandboxId:1e42b6ebd6219f763644a7329091355536bd25fbec6d9363edc8507459623803,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723468184023902556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b0a3a3b-5b98-4e1b-8eed-e8ec08b7c165 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.673749805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5595d16-c989-42f7-b025-26cd0d4a906f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.673854133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5595d16-c989-42f7-b025-26cd0d4a906f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.674284585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b80ce301aaae1a1fb809437df03fe9dc37db2f4de4945ad46851b54e0d325f6,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723468217019349403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1860c22afbe06c01fafdca14d92608bfc1678d0d38a5c7674b78dc94969e4e7c,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723468216994799566,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c24d872a4f460f0b234d86ebfcb2f99c1727cd06a352884f898c6e7363fe7fc,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723468217005890679,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6156a4bb87c49342cdc1717d828d2196f6b4ee81c48c95b7be2c3e7f4ce18aef,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723468216981504838,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4184067a019f48d6828ea7f3db544a2629806202932d7acfb6ceb24873b240e4,PodSandboxId:ce3a3d223b1e3b0ace739318ebe202b9ea4cc23a6b4d5785ade77d5ad185092f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723468204328435646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2989bbc7b93963e50c12a94841febbff67796c4731d58d733bf87e32e56c489,PodSandboxId:55d905b3c26277edf55768695a93ec2ae6a191a873e5fcdec400868af6fcbaf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204415764201,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76098050318bb0190ae6b5af48880ed9a06fcd95db6e9ca4b0f03e9f3add840a,PodSandboxId:5057c67d0265222a5437fc3ec0d8e09489bced11577f445f2eaf4e45c47015b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204272531223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48
b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460621584a49f59f25fd0edb0ed8d8c4340c48ad6b7a9b76719f44b1fdcf9fd0,PodSandboxId:1f6ebd03a8872517ca5c44b33e923d051da059701515041064b876a48c4f4f26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:17234682039983
30541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30f43cfbc512dac86c13c72dc9dd66a3a9fcad74f4de33105c329a363b061ef,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723468203017604838,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb218b4c543dcc4b683508fb430b917eb53a13d1023dc9353fca237565cd0256,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723468202943291575,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b93a6654ce34966875ca353fe539d961ef8f78e20be6a1cbcb573a5a0eda460,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723468202785742340,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3851052528b03c4b87bd3b0b4d97a004aa3d7e2d2f80c13720e5d91efb0dd1c,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723468202850249417,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bf8900d2935a862d13ee860924f962d5dc54db76088f268774554cac7b3,PodSandboxId:90b75dd1b44143030bc4ec5d928f39b8792400d2bdf905a34c5962fb6db8abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184528791250,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09506b6b244fed376e35dd3b9778fa5b0a1b95f87bb0ca7b6b2ee8b645108df,PodSandboxId:b13e05c61c050ee7c5d311eca6fb4ab22b23c5c087e35e2e21104c3f1ed3d91e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184560003895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f0b87e35afd881b9858bc8fc429db9f65295664923c32a7a9a7bdeefe714ce,PodSandboxId:c6dafac038cd8f5d8690e4b611d2d95ebc19ee520219381612057c689985d846,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723468184247722484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c245096f3979ea98931464843609390c41d2f3f5a6902cacf3d77a85d848e4,PodSandboxId:1e42b6ebd6219f763644a7329091355536bd25fbec6d9363edc8507459623803,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723468184023902556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5595d16-c989-42f7-b025-26cd0d4a906f name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.728180100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75bb55ee-460e-4da9-90f2-1bdfeb74315f name=/runtime.v1.RuntimeService/Version
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.728257091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75bb55ee-460e-4da9-90f2-1bdfeb74315f name=/runtime.v1.RuntimeService/Version
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.729579030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9b16faa-24ac-41ce-b507-dfc739367026 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.730454233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723468224730427525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9b16faa-24ac-41ce-b507-dfc739367026 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.731000444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32af252d-b979-4f79-a7c5-2299753bb0f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.731101589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32af252d-b979-4f79-a7c5-2299753bb0f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.731417387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b80ce301aaae1a1fb809437df03fe9dc37db2f4de4945ad46851b54e0d325f6,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723468217019349403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1860c22afbe06c01fafdca14d92608bfc1678d0d38a5c7674b78dc94969e4e7c,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723468216994799566,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c24d872a4f460f0b234d86ebfcb2f99c1727cd06a352884f898c6e7363fe7fc,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723468217005890679,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6156a4bb87c49342cdc1717d828d2196f6b4ee81c48c95b7be2c3e7f4ce18aef,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723468216981504838,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4184067a019f48d6828ea7f3db544a2629806202932d7acfb6ceb24873b240e4,PodSandboxId:ce3a3d223b1e3b0ace739318ebe202b9ea4cc23a6b4d5785ade77d5ad185092f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723468204328435646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2989bbc7b93963e50c12a94841febbff67796c4731d58d733bf87e32e56c489,PodSandboxId:55d905b3c26277edf55768695a93ec2ae6a191a873e5fcdec400868af6fcbaf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204415764201,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76098050318bb0190ae6b5af48880ed9a06fcd95db6e9ca4b0f03e9f3add840a,PodSandboxId:5057c67d0265222a5437fc3ec0d8e09489bced11577f445f2eaf4e45c47015b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204272531223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48
b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460621584a49f59f25fd0edb0ed8d8c4340c48ad6b7a9b76719f44b1fdcf9fd0,PodSandboxId:1f6ebd03a8872517ca5c44b33e923d051da059701515041064b876a48c4f4f26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:17234682039983
30541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30f43cfbc512dac86c13c72dc9dd66a3a9fcad74f4de33105c329a363b061ef,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723468203017604838,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb218b4c543dcc4b683508fb430b917eb53a13d1023dc9353fca237565cd0256,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723468202943291575,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b93a6654ce34966875ca353fe539d961ef8f78e20be6a1cbcb573a5a0eda460,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723468202785742340,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3851052528b03c4b87bd3b0b4d97a004aa3d7e2d2f80c13720e5d91efb0dd1c,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723468202850249417,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bf8900d2935a862d13ee860924f962d5dc54db76088f268774554cac7b3,PodSandboxId:90b75dd1b44143030bc4ec5d928f39b8792400d2bdf905a34c5962fb6db8abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184528791250,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09506b6b244fed376e35dd3b9778fa5b0a1b95f87bb0ca7b6b2ee8b645108df,PodSandboxId:b13e05c61c050ee7c5d311eca6fb4ab22b23c5c087e35e2e21104c3f1ed3d91e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184560003895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f0b87e35afd881b9858bc8fc429db9f65295664923c32a7a9a7bdeefe714ce,PodSandboxId:c6dafac038cd8f5d8690e4b611d2d95ebc19ee520219381612057c689985d846,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723468184247722484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c245096f3979ea98931464843609390c41d2f3f5a6902cacf3d77a85d848e4,PodSandboxId:1e42b6ebd6219f763644a7329091355536bd25fbec6d9363edc8507459623803,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723468184023902556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32af252d-b979-4f79-a7c5-2299753bb0f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.768171361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f7455e1-aad7-4488-9b78-89a012666335 name=/runtime.v1.RuntimeService/Version
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.768269719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f7455e1-aad7-4488-9b78-89a012666335 name=/runtime.v1.RuntimeService/Version
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.769356943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a12c1468-9fd4-4bed-9de9-cf1affa7d3b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.769718617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723468224769695131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a12c1468-9fd4-4bed-9de9-cf1affa7d3b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.770293571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3a47ecd-4986-4793-a34b-87e41215f8b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.770344612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3a47ecd-4986-4793-a34b-87e41215f8b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 12 13:10:24 kubernetes-upgrade-399526 crio[2274]: time="2024-08-12 13:10:24.770972723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b80ce301aaae1a1fb809437df03fe9dc37db2f4de4945ad46851b54e0d325f6,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723468217019349403,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1860c22afbe06c01fafdca14d92608bfc1678d0d38a5c7674b78dc94969e4e7c,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723468216994799566,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c24d872a4f460f0b234d86ebfcb2f99c1727cd06a352884f898c6e7363fe7fc,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1723468217005890679,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6156a4bb87c49342cdc1717d828d2196f6b4ee81c48c95b7be2c3e7f4ce18aef,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723468216981504838,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4184067a019f48d6828ea7f3db544a2629806202932d7acfb6ceb24873b240e4,PodSandboxId:ce3a3d223b1e3b0ace739318ebe202b9ea4cc23a6b4d5785ade77d5ad185092f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723468204328435646,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2989bbc7b93963e50c12a94841febbff67796c4731d58d733bf87e32e56c489,PodSandboxId:55d905b3c26277edf55768695a93ec2ae6a191a873e5fcdec400868af6fcbaf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204415764201,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76098050318bb0190ae6b5af48880ed9a06fcd95db6e9ca4b0f03e9f3add840a,PodSandboxId:5057c67d0265222a5437fc3ec0d8e09489bced11577f445f2eaf4e45c47015b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723468204272531223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48
b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460621584a49f59f25fd0edb0ed8d8c4340c48ad6b7a9b76719f44b1fdcf9fd0,PodSandboxId:1f6ebd03a8872517ca5c44b33e923d051da059701515041064b876a48c4f4f26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:17234682039983
30541,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30f43cfbc512dac86c13c72dc9dd66a3a9fcad74f4de33105c329a363b061ef,PodSandboxId:a54e3acfa63cd629e9b90a486f92f67b76cccfe47a942f810bb0a254872e5b77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723468203017604838,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2112bd3608e81a6084ba485c6bbb5657,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb218b4c543dcc4b683508fb430b917eb53a13d1023dc9353fca237565cd0256,PodSandboxId:262fa3fc3df4a0a548cf5911d549488328408e550f8064ba582b13c432eb836a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723468202943291575,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a2fc405ff1e6397c1d090d967148c31,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b93a6654ce34966875ca353fe539d961ef8f78e20be6a1cbcb573a5a0eda460,PodSandboxId:2e268a551b1927523cd63ba95f073b97ea23ba4465c77558daeaa7f890ca93bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723468202785742340,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1934693738a8399dee5ff86e2d0365,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3851052528b03c4b87bd3b0b4d97a004aa3d7e2d2f80c13720e5d91efb0dd1c,PodSandboxId:656ad700f69fa061d679ea7032628317d798608a025b21de3ee3b8d2e44c08d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723468202850249417,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-399526,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fcb2eeb6a61b25dbcbf686a7202dea8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bf8900d2935a862d13ee860924f962d5dc54db76088f268774554cac7b3,PodSandboxId:90b75dd1b44143030bc4ec5d928f39b8792400d2bdf905a34c5962fb6db8abe8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184528791250,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-6f6b679f8f-hhrt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bca73f7-320f-48b7-b7a6-d90dca43cac8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09506b6b244fed376e35dd3b9778fa5b0a1b95f87bb0ca7b6b2ee8b645108df,PodSandboxId:b13e05c61c050ee7c5d311eca6fb4ab22b23c5c087e35e2e21104c3f1ed3d91e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723468184560003895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sprrc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16aa800-3a6a-4588-b708-ccfee84d3027,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f0b87e35afd881b9858bc8fc429db9f65295664923c32a7a9a7bdeefe714ce,PodSandboxId:c6dafac038cd8f5d8690e4b611d2d95ebc19ee520219381612057c689985d846,Metadata
:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723468184247722484,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j5f5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c245096f3979ea98931464843609390c41d2f3f5a6902cacf3d77a85d848e4,PodSandboxId:1e42b6ebd6219f763644a7329091355536bd25fbec6d9363edc8507459623803,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723468184023902556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68ee19fe-7e09-431e-8b0c-e7cac69837ad,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3a47ecd-4986-4793-a34b-87e41215f8b5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b80ce301aaae       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   7 seconds ago       Running             kube-scheduler            2                   2e268a551b192       kube-scheduler-kubernetes-upgrade-399526
	4c24d872a4f46       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            2                   262fa3fc3df4a       kube-apiserver-kubernetes-upgrade-399526
	1860c22afbe06       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   2                   a54e3acfa63cd       kube-controller-manager-kubernetes-upgrade-399526
	6156a4bb87c49       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   656ad700f69fa       etcd-kubernetes-upgrade-399526
	c2989bbc7b939       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   55d905b3c2627       coredns-6f6b679f8f-sprrc
	4184067a019f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 seconds ago      Running             storage-provisioner       1                   ce3a3d223b1e3       storage-provisioner
	76098050318bb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   5057c67d02652       coredns-6f6b679f8f-hhrt7
	460621584a49f       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   20 seconds ago      Running             kube-proxy                1                   1f6ebd03a8872       kube-proxy-j5f5s
	e30f43cfbc512       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   21 seconds ago      Exited              kube-controller-manager   1                   a54e3acfa63cd       kube-controller-manager-kubernetes-upgrade-399526
	bb218b4c543dc       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   21 seconds ago      Exited              kube-apiserver            1                   262fa3fc3df4a       kube-apiserver-kubernetes-upgrade-399526
	c3851052528b0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago      Exited              etcd                      1                   656ad700f69fa       etcd-kubernetes-upgrade-399526
	2b93a6654ce34       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   22 seconds ago      Exited              kube-scheduler            1                   2e268a551b192       kube-scheduler-kubernetes-upgrade-399526
	b09506b6b244f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago      Exited              coredns                   0                   b13e05c61c050       coredns-6f6b679f8f-sprrc
	868a6bf8900d2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago      Exited              coredns                   0                   90b75dd1b4414       coredns-6f6b679f8f-hhrt7
	a7f0b87e35afd       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   40 seconds ago      Exited              kube-proxy                0                   c6dafac038cd8       kube-proxy-j5f5s
	d5c245096f397       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   40 seconds ago      Exited              storage-provisioner       0                   1e42b6ebd6219       storage-provisioner
	
	
	==> coredns [76098050318bb0190ae6b5af48880ed9a06fcd95db6e9ca4b0f03e9f3add840a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [868a6bf8900d2935a862d13ee860924f962d5dc54db76088f268774554cac7b3] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b09506b6b244fed376e35dd3b9778fa5b0a1b95f87bb0ca7b6b2ee8b645108df] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2989bbc7b93963e50c12a94841febbff67796c4731d58d733bf87e32e56c489] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-399526
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-399526
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 12 Aug 2024 13:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-399526
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 12 Aug 2024 13:10:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 12 Aug 2024 13:10:20 +0000   Mon, 12 Aug 2024 13:09:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 12 Aug 2024 13:10:20 +0000   Mon, 12 Aug 2024 13:09:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 12 Aug 2024 13:10:20 +0000   Mon, 12 Aug 2024 13:09:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 12 Aug 2024 13:10:20 +0000   Mon, 12 Aug 2024 13:09:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.194
	  Hostname:    kubernetes-upgrade-399526
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 36d945f1c8a74d1cac86666f864fa11d
	  System UUID:                36d945f1-c8a7-4d1c-ac86-666f864fa11d
	  Boot ID:                    a57cbf0f-a229-480e-8228-146abb64db38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hhrt7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 coredns-6f6b679f8f-sprrc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 etcd-kubernetes-upgrade-399526                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         47s
	  kube-system                 kube-apiserver-kubernetes-upgrade-399526             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-399526    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-j5f5s                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-kubernetes-upgrade-399526             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node kubernetes-upgrade-399526 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node kubernetes-upgrade-399526 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)  kubelet          Node kubernetes-upgrade-399526 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           43s                node-controller  Node kubernetes-upgrade-399526 event: Registered Node kubernetes-upgrade-399526 in Controller
	  Normal  CIDRAssignmentFailed     43s                cidrAllocator    Node kubernetes-upgrade-399526 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-399526 event: Registered Node kubernetes-upgrade-399526 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.801759] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.058478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075048] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.222675] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.143863] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.333109] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +4.502677] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.068038] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.212356] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[  +8.618832] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.110408] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.157224] kauditd_printk_skb: 76 callbacks suppressed
	[ +11.711439] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.087957] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.088374] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.195590] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.150995] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.295447] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[Aug12 13:10] systemd-fstab-generator[2413]: Ignoring "noauto" option for root device
	[  +0.088436] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.659793] kauditd_printk_skb: 119 callbacks suppressed
	[  +1.422336] systemd-fstab-generator[3447]: Ignoring "noauto" option for root device
	[  +6.395817] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.131347] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [6156a4bb87c49342cdc1717d828d2196f6b4ee81c48c95b7be2c3e7f4ce18aef] <==
	{"level":"info","ts":"2024-08-12T13:10:17.350255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c switched to configuration voters=(5275871951286808684)"}
	{"level":"info","ts":"2024-08-12T13:10:17.351725Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"14a4d92e1229b4cf","local-member-id":"4937a99640e5b06c","added-peer-id":"4937a99640e5b06c","added-peer-peer-urls":["https://192.168.50.194:2380"]}
	{"level":"info","ts":"2024-08-12T13:10:17.351854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14a4d92e1229b4cf","local-member-id":"4937a99640e5b06c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T13:10:17.352792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-12T13:10:17.351554Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-12T13:10:17.355870Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4937a99640e5b06c","initial-advertise-peer-urls":["https://192.168.50.194:2380"],"listen-peer-urls":["https://192.168.50.194:2380"],"advertise-client-urls":["https://192.168.50.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-12T13:10:17.355929Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-12T13:10:17.351587Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.194:2380"}
	{"level":"info","ts":"2024-08-12T13:10:17.356002Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.194:2380"}
	{"level":"info","ts":"2024-08-12T13:10:19.113120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:19.113191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:19.113244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c received MsgPreVoteResp from 4937a99640e5b06c at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:19.113263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became candidate at term 4"}
	{"level":"info","ts":"2024-08-12T13:10:19.113273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c received MsgVoteResp from 4937a99640e5b06c at term 4"}
	{"level":"info","ts":"2024-08-12T13:10:19.113285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became leader at term 4"}
	{"level":"info","ts":"2024-08-12T13:10:19.113318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4937a99640e5b06c elected leader 4937a99640e5b06c at term 4"}
	{"level":"info","ts":"2024-08-12T13:10:19.119443Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4937a99640e5b06c","local-member-attributes":"{Name:kubernetes-upgrade-399526 ClientURLs:[https://192.168.50.194:2379]}","request-path":"/0/members/4937a99640e5b06c/attributes","cluster-id":"14a4d92e1229b4cf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T13:10:19.119458Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:10:19.119688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T13:10:19.119725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T13:10:19.119480Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:10:19.120730Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T13:10:19.120991Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T13:10:19.121690Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T13:10:19.122329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.194:2379"}
	
	
	==> etcd [c3851052528b03c4b87bd3b0b4d97a004aa3d7e2d2f80c13720e5d91efb0dd1c] <==
	{"level":"info","ts":"2024-08-12T13:10:05.605498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-12T13:10:05.605558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c received MsgPreVoteResp from 4937a99640e5b06c at term 2"}
	{"level":"info","ts":"2024-08-12T13:10:05.605594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became candidate at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:05.605619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c received MsgVoteResp from 4937a99640e5b06c at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:05.605645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4937a99640e5b06c became leader at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:05.605671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4937a99640e5b06c elected leader 4937a99640e5b06c at term 3"}
	{"level":"info","ts":"2024-08-12T13:10:05.607213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:10:05.607213Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4937a99640e5b06c","local-member-attributes":"{Name:kubernetes-upgrade-399526 ClientURLs:[https://192.168.50.194:2379]}","request-path":"/0/members/4937a99640e5b06c/attributes","cluster-id":"14a4d92e1229b4cf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-12T13:10:05.607586Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-12T13:10:05.607896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-12T13:10:05.607941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-12T13:10:05.608357Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T13:10:05.608694Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-12T13:10:05.609173Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.194:2379"}
	{"level":"info","ts":"2024-08-12T13:10:05.609877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-12T13:10:14.469908Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-12T13:10:14.469972Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-399526","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.194:2380"],"advertise-client-urls":["https://192.168.50.194:2379"]}
	{"level":"warn","ts":"2024-08-12T13:10:14.470121Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T13:10:14.470174Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T13:10:14.471982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.194:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-12T13:10:14.472237Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.194:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-12T13:10:14.472414Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4937a99640e5b06c","current-leader-member-id":"4937a99640e5b06c"}
	{"level":"info","ts":"2024-08-12T13:10:14.476919Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.194:2380"}
	{"level":"info","ts":"2024-08-12T13:10:14.477211Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.194:2380"}
	{"level":"info","ts":"2024-08-12T13:10:14.477240Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-399526","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.194:2380"],"advertise-client-urls":["https://192.168.50.194:2379"]}
	
	
	==> kernel <==
	 13:10:25 up 1 min,  0 users,  load average: 0.85, 0.24, 0.08
	Linux kubernetes-upgrade-399526 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c24d872a4f460f0b234d86ebfcb2f99c1727cd06a352884f898c6e7363fe7fc] <==
	I0812 13:10:20.543621       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0812 13:10:20.552740       1 shared_informer.go:320] Caches are synced for configmaps
	I0812 13:10:20.552932       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0812 13:10:20.584013       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0812 13:10:20.592368       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0812 13:10:20.592399       1 policy_source.go:224] refreshing policies
	I0812 13:10:20.593267       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0812 13:10:20.599692       1 controller.go:615] quota admission added evaluator for: endpoints
	I0812 13:10:20.614619       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0812 13:10:20.614673       1 aggregator.go:171] initial CRD sync complete...
	I0812 13:10:20.614681       1 autoregister_controller.go:144] Starting autoregister controller
	I0812 13:10:20.614687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0812 13:10:20.614692       1 cache.go:39] Caches are synced for autoregister controller
	I0812 13:10:20.642871       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0812 13:10:20.644266       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0812 13:10:20.644905       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0812 13:10:20.659293       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0812 13:10:21.348093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0812 13:10:21.702623       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.194]
	I0812 13:10:21.709071       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0812 13:10:22.260577       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0812 13:10:22.271760       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0812 13:10:22.316412       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0812 13:10:22.441375       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0812 13:10:22.459574       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [bb218b4c543dcc4b683508fb430b917eb53a13d1023dc9353fca237565cd0256] <==
	I0812 13:10:07.274471       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0812 13:10:07.274714       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0812 13:10:07.275072       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0812 13:10:07.278205       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0812 13:10:07.279867       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0812 13:10:07.283949       1 controller.go:157] Shutting down quota evaluator
	I0812 13:10:07.284468       1 controller.go:176] quota evaluator worker shutdown
	I0812 13:10:07.286371       1 controller.go:176] quota evaluator worker shutdown
	I0812 13:10:07.286501       1 controller.go:176] quota evaluator worker shutdown
	I0812 13:10:07.286988       1 controller.go:176] quota evaluator worker shutdown
	I0812 13:10:07.287100       1 controller.go:176] quota evaluator worker shutdown
	E0812 13:10:07.974786       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:07.974784       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:08.974344       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:08.974719       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0812 13:10:09.974882       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:09.975298       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:10.974770       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:10.975309       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:11.975086       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:11.975239       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:12.975285       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:12.975294       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:13.975371       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0812 13:10:13.975405       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [1860c22afbe06c01fafdca14d92608bfc1678d0d38a5c7674b78dc94969e4e7c] <==
	I0812 13:10:23.891566       1 shared_informer.go:320] Caches are synced for HPA
	I0812 13:10:23.891636       1 shared_informer.go:320] Caches are synced for stateful set
	I0812 13:10:23.891752       1 shared_informer.go:320] Caches are synced for persistent volume
	I0812 13:10:23.891945       1 shared_informer.go:320] Caches are synced for ephemeral
	I0812 13:10:23.900370       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0812 13:10:23.904001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.575551ms"
	I0812 13:10:23.904157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.603µs"
	I0812 13:10:23.912596       1 shared_informer.go:320] Caches are synced for deployment
	I0812 13:10:23.932180       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0812 13:10:23.941886       1 shared_informer.go:320] Caches are synced for PVC protection
	I0812 13:10:23.942194       1 shared_informer.go:320] Caches are synced for GC
	I0812 13:10:23.942210       1 shared_informer.go:320] Caches are synced for taint
	I0812 13:10:23.942895       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0812 13:10:23.943141       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-399526"
	I0812 13:10:23.943255       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0812 13:10:23.949719       1 shared_informer.go:320] Caches are synced for attach detach
	I0812 13:10:23.994835       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0812 13:10:23.994893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-399526"
	I0812 13:10:24.001289       1 shared_informer.go:320] Caches are synced for endpoint
	I0812 13:10:24.006267       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 13:10:24.027840       1 shared_informer.go:320] Caches are synced for resource quota
	I0812 13:10:24.041946       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0812 13:10:24.465297       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 13:10:24.491777       1 shared_informer.go:320] Caches are synced for garbage collector
	I0812 13:10:24.491806       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [e30f43cfbc512dac86c13c72dc9dd66a3a9fcad74f4de33105c329a363b061ef] <==
	
	
	==> kube-proxy [460621584a49f59f25fd0edb0ed8d8c4340c48ad6b7a9b76719f44b1fdcf9fd0] <==
	I0812 13:10:07.341440       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0812 13:10:07.341533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:07.341617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:07.341733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:07.341782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:07.341927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:07.341999       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.50.194:8443: connect: connection refused"
	E0812 13:10:07.342013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:08.449992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:08.450124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:08.649903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:08.649980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:08.687268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:08.687428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:10.459744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:10.459925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:11.367650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:11.367748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:11.635547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:11.635688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:16.302383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:16.302431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-399526&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	W0812 13:10:16.573001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.194:8443: connect: connection refused
	E0812 13:10:16.573110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.194:8443: connect: connection refused" logger="UnhandledError"
	I0812 13:10:20.641966       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a7f0b87e35afd881b9858bc8fc429db9f65295664923c32a7a9a7bdeefe714ce] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0812 13:09:44.676490       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0812 13:09:44.710960       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.194"]
	E0812 13:09:44.711088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0812 13:09:44.836443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0812 13:09:44.836549       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0812 13:09:44.836607       1 server_linux.go:169] "Using iptables Proxier"
	I0812 13:09:44.841118       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0812 13:09:44.841610       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0812 13:09:44.842191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 13:09:44.844912       1 config.go:197] "Starting service config controller"
	I0812 13:09:44.845304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0812 13:09:44.845371       1 config.go:104] "Starting endpoint slice config controller"
	I0812 13:09:44.845395       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0812 13:09:44.848596       1 config.go:326] "Starting node config controller"
	I0812 13:09:44.848803       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0812 13:09:44.946432       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0812 13:09:44.946504       1 shared_informer.go:320] Caches are synced for service config
	I0812 13:09:44.948863       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b80ce301aaae1a1fb809437df03fe9dc37db2f4de4945ad46851b54e0d325f6] <==
	W0812 13:10:20.557378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.557447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.557588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0812 13:10:20.557624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.557740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.557771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.557828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0812 13:10:20.557839       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W0812 13:10:20.568988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.569744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.569089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0812 13:10:20.569561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0812 13:10:20.570854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	W0812 13:10:20.570423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0812 13:10:20.570917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	W0812 13:10:20.570463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.570970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.570504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0812 13:10:20.570999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W0812 13:10:20.570568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.573488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W0812 13:10:20.570726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0812 13:10:20.573530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	E0812 13:10:20.570760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I0812 13:10:20.612314       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [2b93a6654ce34966875ca353fe539d961ef8f78e20be6a1cbcb573a5a0eda460] <==
	I0812 13:10:04.481896       1 serving.go:386] Generated self-signed cert in-memory
	W0812 13:10:07.042671       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0812 13:10:07.042728       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0812 13:10:07.042738       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0812 13:10:07.042748       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0812 13:10:07.140596       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0812 13:10:07.143100       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0812 13:10:07.150414       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0812 13:10:07.150529       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0812 13:10:07.151497       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0812 13:10:07.150561       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0812 13:10:07.253159       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0812 13:10:14.605061       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: W0812 13:10:20.470885    3454 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-399526" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-399526' and this object
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:20.470926    3454 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:kubernetes-upgrade-399526\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-399526' and this object" logger="UnhandledError"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.474550    3454 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.579623    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/68ee19fe-7e09-431e-8b0c-e7cac69837ad-tmp\") pod \"storage-provisioner\" (UID: \"68ee19fe-7e09-431e-8b0c-e7cac69837ad\") " pod="kube-system/storage-provisioner"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.579787    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-xtables-lock\") pod \"kube-proxy-j5f5s\" (UID: \"a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0\") " pod="kube-system/kube-proxy-j5f5s"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.579844    3454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-lib-modules\") pod \"kube-proxy-j5f5s\" (UID: \"a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0\") " pod="kube-system/kube-proxy-j5f5s"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.669704    3454 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-399526"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.669788    3454 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-399526"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.669817    3454 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: I0812 13:10:20.671408    3454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 12 13:10:20 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:20.671882    3454 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-399526\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-399526"
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.581649    3454 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.582716    3454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-kube-proxy podName:a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0 nodeName:}" failed. No retries permitted until 2024-08-12 13:10:22.082311791 +0000 UTC m=+5.733768420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-kube-proxy") pod "kube-proxy-j5f5s" (UID: "a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0") : failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631085    3454 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631303    3454 projected.go:194] Error preparing data for projected volume kube-api-access-nh582 for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631500    3454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ee19fe-7e09-431e-8b0c-e7cac69837ad-kube-api-access-nh582 podName:68ee19fe-7e09-431e-8b0c-e7cac69837ad nodeName:}" failed. No retries permitted until 2024-08-12 13:10:22.131476434 +0000 UTC m=+5.782933085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nh582" (UniqueName: "kubernetes.io/projected/68ee19fe-7e09-431e-8b0c-e7cac69837ad-kube-api-access-nh582") pod "storage-provisioner" (UID: "68ee19fe-7e09-431e-8b0c-e7cac69837ad") : failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631170    3454 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631643    3454 projected.go:194] Error preparing data for projected volume kube-api-access-24frr for pod kube-system/coredns-6f6b679f8f-hhrt7: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631776    3454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bca73f7-320f-48b7-b7a6-d90dca43cac8-kube-api-access-24frr podName:0bca73f7-320f-48b7-b7a6-d90dca43cac8 nodeName:}" failed. No retries permitted until 2024-08-12 13:10:22.131759521 +0000 UTC m=+5.783216173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-24frr" (UniqueName: "kubernetes.io/projected/0bca73f7-320f-48b7-b7a6-d90dca43cac8-kube-api-access-24frr") pod "coredns-6f6b679f8f-hhrt7" (UID: "0bca73f7-320f-48b7-b7a6-d90dca43cac8") : failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631828    3454 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631893    3454 projected.go:194] Error preparing data for projected volume kube-api-access-2hfsv for pod kube-system/coredns-6f6b679f8f-sprrc: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631851    3454 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.632002    3454 projected.go:194] Error preparing data for projected volume kube-api-access-jcgds for pod kube-system/kube-proxy-j5f5s: failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.631976    3454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d16aa800-3a6a-4588-b708-ccfee84d3027-kube-api-access-2hfsv podName:d16aa800-3a6a-4588-b708-ccfee84d3027 nodeName:}" failed. No retries permitted until 2024-08-12 13:10:22.131963084 +0000 UTC m=+5.783419735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2hfsv" (UniqueName: "kubernetes.io/projected/d16aa800-3a6a-4588-b708-ccfee84d3027-kube-api-access-2hfsv") pod "coredns-6f6b679f8f-sprrc" (UID: "d16aa800-3a6a-4588-b708-ccfee84d3027") : failed to sync configmap cache: timed out waiting for the condition
	Aug 12 13:10:21 kubernetes-upgrade-399526 kubelet[3454]: E0812 13:10:21.632220    3454 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-kube-api-access-jcgds podName:a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0 nodeName:}" failed. No retries permitted until 2024-08-12 13:10:22.132206265 +0000 UTC m=+5.783662903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jcgds" (UniqueName: "kubernetes.io/projected/a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0-kube-api-access-jcgds") pod "kube-proxy-j5f5s" (UID: "a1c3d3cd-98ab-4775-be3f-0d6061dd8bb0") : failed to sync configmap cache: timed out waiting for the condition
	
	
	==> storage-provisioner [4184067a019f48d6828ea7f3db544a2629806202932d7acfb6ceb24873b240e4] <==
	I0812 13:10:04.816877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 13:10:07.260735       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 13:10:07.260820       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0812 13:10:07.295602       1 leaderelection.go:329] error initially creating leader election record: Post "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 13:10:10.747992       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0812 13:10:15.006341       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0812 13:10:20.628005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 13:10:20.628729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-399526_c48c3f43-fced-4e69-b18a-eaede4a0cf37!
	I0812 13:10:20.628590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d306b78-10de-401c-9cf3-5d13ece03e08", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-399526_c48c3f43-fced-4e69-b18a-eaede4a0cf37 became leader
	I0812 13:10:20.729758       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-399526_c48c3f43-fced-4e69-b18a-eaede4a0cf37!
	
	
	==> storage-provisioner [d5c245096f3979ea98931464843609390c41d2f3f5a6902cacf3d77a85d848e4] <==
	I0812 13:09:44.232891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-399526 -n kubernetes-upgrade-399526
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-399526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-399526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-399526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-399526: (1.12815518s)
--- FAIL: TestKubernetesUpgrade (410.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7200.061s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-230239 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0812 13:20:28.361113  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/enable-default-cni-620755/client.crt: no such file or directory
E0812 13:20:33.105627  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/flannel-620755/client.crt: no such file or directory
E0812 13:20:33.598376  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/calico-620755/client.crt: no such file or directory
E0812 13:20:44.615621  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 13:20:57.034560  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/bridge-620755/client.crt: no such file or directory
E0812 13:20:57.824529  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/custom-flannel-620755/client.crt: no such file or directory
E0812 13:21:50.281435  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/enable-default-cni-620755/client.crt: no such file or directory
E0812 13:21:52.998959  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/auto-620755/client.crt: no such file or directory
E0812 13:21:55.026024  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/flannel-620755/client.crt: no such file or directory
E0812 13:22:07.667248  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 13:22:10.165262  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kindnet-620755/client.crt: no such file or directory
E0812 13:22:18.955119  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/bridge-620755/client.crt: no such file or directory
E0812 13:22:20.683810  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/auto-620755/client.crt: no such file or directory
E0812 13:22:37.852098  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/kindnet-620755/client.crt: no such file or directory
E0812 13:22:49.752792  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/calico-620755/client.crt: no such file or directory
E0812 13:23:13.980496  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/custom-flannel-620755/client.crt: no such file or directory
E0812 13:23:17.438914  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/calico-620755/client.crt: no such file or directory
E0812 13:23:41.665713  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/custom-flannel-620755/client.crt: no such file or directory
E0812 13:24:06.437756  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/enable-default-cni-620755/client.crt: no such file or directory
E0812 13:24:11.184106  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/flannel-620755/client.crt: no such file or directory
E0812 13:24:34.122653  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/enable-default-cni-620755/client.crt: no such file or directory
E0812 13:24:35.110394  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/bridge-620755/client.crt: no such file or directory
E0812 13:24:38.866921  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/flannel-620755/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (18m37s)
	TestNetworkPlugins/group (9m46s)
	TestStartStop (16m14s)
	TestStartStop/group/default-k8s-diff-port (9m46s)
	TestStartStop/group/default-k8s-diff-port/serial (9m46s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5m34s)
	TestStartStop/group/embed-certs (10m3s)
	TestStartStop/group/embed-certs/serial (10m3s)
	TestStartStop/group/embed-certs/serial/SecondStart (6m5s)
	TestStartStop/group/no-preload (10m15s)
	TestStartStop/group/no-preload/serial (10m15s)
	TestStartStop/group/no-preload/serial/SecondStart (5m28s)
	TestStartStop/group/old-k8s-version (11m6s)
	TestStartStop/group/old-k8s-version/serial (11m6s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (4m31s)

                                                
                                                
goroutine 3462 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006e11e0, 0xc0004f3bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0005202e8, {0x49d8120, 0x2b, 0x2b}, {0x26b7ad6?, 0xc000821b00?, 0x4a94ca0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0004f8be0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0004f8be0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000903880)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2696 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0x10?, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc001c076c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000095fd0?, 0x592e44?, 0xc001c155f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3035 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3034
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1723 [chan receive, 19 minutes]:
testing.(*T).Run(0xc001c06000, {0x265d089?, 0x55127c?}, 0xc001678f18)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001c06000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001c06000, 0x3140378)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 69 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 68
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2919 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2918
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2300 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00086c820, 0x3140598)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1760
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 413 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00028f2c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 385
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2647 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0009caa50, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001609560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009caa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001627810, {0x369bd60, 0xc001a3f710}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001627810, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2644
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2695 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0000004d0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013593e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000000500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000553450, {0x369bd60, 0xc001462f90}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000553450, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 685 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc00173a180, 0xc001338a80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 372
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3034 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001d42750, 0xc0000a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0xa0?, 0xc001d42750, 0xc001d42798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0x99b656?, 0xc00181cc00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x7f3065?, 0xc00011e8c0?, 0xc00155c8f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3030
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1860 [chan receive, 10 minutes]:
testing.(*testContext).waitParallel(0xc000559680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc001c06820, 0xc001678f18)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1723
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2458 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b9e6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3433 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b4a136018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d6fd40?, 0xc0014d0100?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d6fd40, {0xc0014d0100, 0x7f00, 0x7f00})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019ba8a0, {0xc0014d0100?, 0x21a4920?, 0xfe38?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0019e9c20, {0x369a7c0, 0xc000612c68})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc0019e9c20}, {0x369a7c0, 0xc000612c68}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0019ba8a0?, {0x369a900, 0xc0019e9c20})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0019ba8a0, {0x369a900, 0xc0019e9c20})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc0019e9c20}, {0x369a820, 0xc0019ba8a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0015f8100?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3431
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 245 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7f5b4a136cb0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001ad880)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001ad880)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000152ae0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000152ae0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00087c0f0, {0x36b2ce0, 0xc000152ae0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00087c0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00141d040?, 0xc00141d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 242
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 3271 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013bae40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3307
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2466 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2449
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 872 [select, 74 minutes]:
net/http.(*persistConn).writeLoop(0xc001e1b560)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 869
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 424 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001453750, 0xc0007bbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0xa0?, 0xc001453750, 0xc001453798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0x6db57a?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014537d0?, 0x592e44?, 0xc00010fda0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 413
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3261 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001d40f50, 0xc001d40f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0xc0?, 0xc001d40f50, 0xc001d40f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0x99b656?, 0xc001549e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000225500?, 0xc001a52cc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2917 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0000755d0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001358f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000075600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b275b0, {0x369bd60, 0xc00184e0f0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b275b0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2933
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2933 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000075600, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2931
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3262 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3261
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2880 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2879
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 652 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001589380, 0xc001a53c20)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 651
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3360 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b4a136ea0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0013ba360?, 0xc00154829d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013ba360, {0xc00154829d, 0x563, 0x563})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000708100, {0xc00154829d?, 0xc001a93d30?, 0x22f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a5a360, {0x369a7c0, 0xc0006128e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc000a5a360}, {0x369a7c0, 0xc0006128e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000708100?, {0x369a900, 0xc000a5a360})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000708100, {0x369a900, 0xc000a5a360})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc000a5a360}, {0x369a820, 0xc000708100}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00010ff20?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3359
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 838 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a48f00, 0xc001d2e2a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 837
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2643 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001609680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2642
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2932 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001359020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2931
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2401 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2400
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 871 [select, 74 minutes]:
net/http.(*persistConn).readLoop(0xc001e1b560)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 869
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 412 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013bb620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 385
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2649 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2648
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 423 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00028f290, 0x22)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013bb500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00028f2c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00162f090, {0x369bd60, 0xc001636480}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00162f090, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 413
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2301 [chan receive, 11 minutes]:
testing.(*T).Run(0xc00086c9c0, {0x265e62f?, 0x0?}, 0xc0015f8680)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00086c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00086c9c0, 0xc001f18300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 425 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 424
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3434 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017f0a80, 0xc001d2ed20)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3431
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2648 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc000094f50, 0xc001470f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0x0?, 0xc000094f50, 0xc000094f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc001d34000?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x7f3065?, 0xc000288140?, 0xc0002b4210?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2644
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3359 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x823c5, 0xc000b1dab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001c80390)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001c80390)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001fe480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001fe480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000156d00, 0xc0001fe480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bfcb0, 0xc00002acb0}, 0xc000156d00, {0xc000610f30, 0x12}, {0x0?, 0xc001f70760?}, {0x551133?, 0x4a170f?}, {0xc000b45c00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000156d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000156d00, 0xc00050a000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3141
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3442 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x82551, 0xc001485ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001cc0630)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001cc0630)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00181c480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00181c480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000b76c0, 0xc00181c480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bfcb0, 0xc00002e1c0}, 0xc0000b76c0, {0xc001a94318, 0x11}, {0x0?, 0xc001f70760?}, {0x551133?, 0x4a170f?}, {0xc001888000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000b76c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000b76c0, 0xc000280180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3260 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0016bed50, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b3fec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016bed80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0015acd90, {0x369bd60, 0xc0019e9ce0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0015acd90, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3301
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2878 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009ca850, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001e13b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009caac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001627df0, {0x369bd60, 0xc00082f830}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001627df0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2875
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2874 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e13c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2449 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc00133ef50, 0xc00133ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0xa0?, 0xc00133ef50, 0xc00133ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc00141c680?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00133efd0?, 0x592e44?, 0xc001412000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3404 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c20600, 0xc001339800)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3401
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1760 [chan receive, 17 minutes]:
testing.(*T).Run(0xc00141c9c0, {0x265d089?, 0x551133?}, 0x3140598)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00141c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00141c9c0, 0x31403c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2713 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001359500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2709
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2400 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001452f50, 0xc001452f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0x60?, 0xc001452f50, 0xc001452f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc001c069c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000225200?, 0xc0015ea360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2382
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2714 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000000500, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2709
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2448 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001413ed0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b9e5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001413f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e45150, {0x369bd60, 0xc0019b45a0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e45150, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2382 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001412440, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2422
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2918 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001a8ef50, 0xc001a8ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0x60?, 0xc001a8ef50, 0xc001a8ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc00190ca20?, 0xc00158aa00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0013a1680?, 0xc001339560?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2933
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3141 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0000b64e0, {0x266a489?, 0x60400000004?}, 0xc00050a000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0000b64e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0000b64e0, 0xc0001ad980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2322
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3033 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009cb110, 0x11)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001e12a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009cb1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b26360, {0x369bd60, 0xc000a5aae0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b26360, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3030
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2875 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009caac0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2821
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2888 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0000b6680, {0x266a489?, 0x60400000004?}, 0xc001990280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0000b6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0000b6680, 0xc0015f8680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2301
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3443 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f5b4a136bb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001966120?, 0xc00154938b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001966120, {0xc00154938b, 0x475, 0x475})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00093c268, {0xc00154938b?, 0x5383e0?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013e6bd0, {0x369a7c0, 0xc000708368})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc0013e6bd0}, {0x369a7c0, 0xc000708368}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00093c268?, {0x369a900, 0xc0013e6bd0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00093c268, {0x369a900, 0xc0013e6bd0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc0013e6bd0}, {0x369a820, 0xc00093c268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000280180?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3442
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2977 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001c07040, {0x266a489?, 0x60400000004?}, 0xc000280180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001c07040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001c07040, 0xc00050a280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2304
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2644 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009caa80, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2642
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2381 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001358c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2422
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2399 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001412410, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001358b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001412440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001625450, {0x369bd60, 0xc0009a6ba0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001625450, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2382
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3431 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x826db, 0xc001472ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc000982a80)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc000982a80)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017f0a80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0017f0a80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001c07520, 0xc0017f0a80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bfcb0, 0xc00002afc0}, 0xc001c07520, {0xc0015beb10, 0x16}, {0x0?, 0xc001340760?}, {0x551133?, 0x4a170f?}, {0xc0013a1200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001c07520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001c07520, 0xc001990280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2888
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2879 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001345f50, 0xc001345f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0xe0?, 0xc001345f50, 0xc001345f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0xc001cc41a0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc002054600?, 0xc0015eb3e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2875
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3029 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e12cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3059
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3444 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b4a136ac0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001966240?, 0xc00151b596?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001966240, {0xc00151b596, 0x16a6a, 0x16a6a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00093c2a8, {0xc00151b596?, 0xd0a0d2222203d04?, 0x20000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013e6c00, {0x369a7c0, 0xc000708370})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc0013e6c00}, {0x369a7c0, 0xc000708370}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00093c2a8?, {0x369a900, 0xc0013e6c00})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00093c2a8, {0x369a900, 0xc0013e6c00})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc0013e6c00}, {0x369a820, 0xc00093c2a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x4920230a0d0a0d5d?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3442
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3030 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009cb1c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3059
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2302 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc000559680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00086cb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00086cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00086cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00086cb60, 0xc001f18340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2303 [chan receive, 10 minutes]:
testing.(*T).Run(0xc00086cea0, {0x265e62f?, 0x0?}, 0xc0001adc80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00086cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00086cea0, 0xc001f18380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2304 [chan receive, 10 minutes]:
testing.(*T).Run(0xc00086d040, {0x265e62f?, 0x0?}, 0xc00050a280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00086d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00086d040, 0xc001f183c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3153 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0000b7520, {0x266a489?, 0x60400000004?}, 0xc0015f8100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0000b7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0000b7520, 0xc0001adc80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2303
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2322 [chan receive, 10 minutes]:
testing.(*T).Run(0xc00086d380, {0x265e62f?, 0x0?}, 0xc0001ad980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00086d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00086d380, 0xc001f18480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2697 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2696
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3445 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc00181c480, 0xc001620b40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3442
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2459 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001413f00, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3300 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001e12120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3286
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3272 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009ca540, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3307
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3301 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016bed80, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3286
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3410 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fe480, 0xc0000613e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3359
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3401 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x82500, 0xc0007f2ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001894810)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001894810)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001c20600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001c20600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00086cd00, 0xc001c20600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bfcb0, 0xc00002af50}, 0xc00086cd00, {0xc0006b8be0, 0x1c}, {0x0?, 0xc000502760?}, {0x551133?, 0x4a170f?}, {0xc00028c600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00086cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00086cd00, 0xc0015f8100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3153
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3432 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f5b4a136110, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d6fc80?, 0xc000a7f2df?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d6fc80, {0xc000a7f2df, 0x521, 0x521})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019ba860, {0xc000a7f2df?, 0x21a4904?, 0x20a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0019e9980, {0x369a7c0, 0xc00093c660})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc0019e9980}, {0x369a7c0, 0xc00093c660}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0019ba860?, {0x369a900, 0xc0019e9980})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0019ba860, {0x369a900, 0xc0019e9980})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc0019e9980}, {0x369a820, 0xc0019ba860}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001990280?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3431
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3312 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0009ca510, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013bad20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009ca540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004af0f0, {0x369bd60, 0xc0013e60f0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004af0f0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3272
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3313 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bfe70, 0xc000060060}, 0xc001f72750, 0xc001f72798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bfe70, 0xc000060060}, 0x6f?, 0xc001f72750, 0xc001f72798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bfe70?, 0xc000060060?}, 0x99b656?, 0xc001d4fc80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001f727d0?, 0x592e44?, 0x656874202020230a?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3272
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3330 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3313
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3361 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b4a136208, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0013ba420?, 0xc00149dc06?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013ba420, {0xc00149dc06, 0xe3fa, 0xe3fa})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000708130, {0xc00149dc06?, 0x21a4920?, 0xfe13?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a5a390, {0x369a7c0, 0xc0019ba320})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc000a5a390}, {0x369a7c0, 0xc0019ba320}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000708130?, {0x369a900, 0xc000a5a390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000708130, {0x369a900, 0xc000a5a390})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc000a5a390}, {0x369a820, 0xc000708130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00050a000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3359
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3402 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f5b4a136da8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0013598c0?, 0xc000a7eacb?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0013598c0, {0xc000a7eacb, 0x535, 0x535})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000612a58, {0xc000a7eacb?, 0x21a4920?, 0x215?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001462930, {0x369a7c0, 0xc0019ba710})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc001462930}, {0x369a7c0, 0xc0019ba710}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000612a58?, {0x369a900, 0xc001462930})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000612a58, {0x369a900, 0xc001462930})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc001462930}, {0x369a820, 0xc000612a58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0015f8100?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3401
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3403 [IO wait]:
internal/poll.runtime_pollWait(0x7f5b4a1369c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001359aa0?, 0xc000b73373?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001359aa0, {0xc000b73373, 0x4c8d, 0x4c8d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000612a70, {0xc000b73373?, 0x45fca9?, 0x10000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001462960, {0x369a7c0, 0xc000708348})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x369a900, 0xc001462960}, {0x369a7c0, 0xc000708348}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000612a70?, {0x369a900, 0xc001462960})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000612a70, {0x369a900, 0xc001462960})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x369a900, 0xc001462960}, {0x369a820, 0xc000612a70}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00050a500?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3401
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                    

Test pass (176/221)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 51.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 14.57
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 50.22
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.59
31 TestOffline 65.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 68.13
38 TestCertExpiration 269.83
40 TestForceSystemdFlag 62.42
41 TestForceSystemdEnv 89.22
43 TestKVMDriverInstallOrUpdate 5.46
47 TestErrorSpam/setup 41.09
48 TestErrorSpam/start 0.36
49 TestErrorSpam/status 0.72
50 TestErrorSpam/pause 1.54
51 TestErrorSpam/unpause 1.6
52 TestErrorSpam/stop 4.58
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 100.27
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 35.98
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
64 TestFunctional/serial/CacheCmd/cache/add_local 2.3
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 33.59
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.36
75 TestFunctional/serial/LogsFileCmd 1.45
76 TestFunctional/serial/InvalidService 4.39
78 TestFunctional/parallel/ConfigCmd 0.34
79 TestFunctional/parallel/DashboardCmd 16.31
80 TestFunctional/parallel/DryRun 0.29
81 TestFunctional/parallel/InternationalLanguage 0.15
82 TestFunctional/parallel/StatusCmd 0.9
86 TestFunctional/parallel/ServiceCmdConnect 24.55
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 46.1
90 TestFunctional/parallel/SSHCmd 0.41
91 TestFunctional/parallel/CpCmd 1.31
92 TestFunctional/parallel/MySQL 23.74
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.36
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
102 TestFunctional/parallel/License 0.42
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.79
111 TestFunctional/parallel/ImageCommands/Setup 1.97
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.58
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.72
125 TestFunctional/parallel/ImageCommands/ImageRemove 1.02
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.03
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.61
128 TestFunctional/parallel/ServiceCmd/DeployApp 7.17
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
130 TestFunctional/parallel/ProfileCmd/profile_list 0.29
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
132 TestFunctional/parallel/MountCmd/any-port 8.51
133 TestFunctional/parallel/ServiceCmd/List 0.44
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
136 TestFunctional/parallel/ServiceCmd/Format 0.31
137 TestFunctional/parallel/ServiceCmd/URL 0.3
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 0.78
140 TestFunctional/parallel/MountCmd/specific-port 1.79
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
142 TestFunctional/delete_echo-server_images 0.04
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/StartCluster 271.44
149 TestMultiControlPlane/serial/DeployApp 7
150 TestMultiControlPlane/serial/PingHostFromPods 1.31
151 TestMultiControlPlane/serial/AddWorkerNode 59
152 TestMultiControlPlane/serial/NodeLabels 0.07
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
154 TestMultiControlPlane/serial/CopyFile 13.1
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
160 TestMultiControlPlane/serial/DeleteSecondaryNode 16.99
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
163 TestMultiControlPlane/serial/RestartCluster 379.54
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
165 TestMultiControlPlane/serial/AddSecondaryNode 78.89
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
170 TestJSONOutput/start/Command 97.1
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.69
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.64
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 7.36
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.2
198 TestMainNoArgs 0.05
199 TestMinikubeProfile 91.03
202 TestMountStart/serial/StartWithMountFirst 27.06
203 TestMountStart/serial/VerifyMountFirst 0.39
204 TestMountStart/serial/StartWithMountSecond 27.26
205 TestMountStart/serial/VerifyMountSecond 0.38
206 TestMountStart/serial/DeleteFirst 0.68
207 TestMountStart/serial/VerifyMountPostDelete 0.38
208 TestMountStart/serial/Stop 1.27
209 TestMountStart/serial/RestartStopped 22.98
210 TestMountStart/serial/VerifyMountPostStop 0.38
213 TestMultiNode/serial/FreshStart2Nodes 124.28
214 TestMultiNode/serial/DeployApp2Nodes 6.26
215 TestMultiNode/serial/PingHostFrom2Pods 0.8
216 TestMultiNode/serial/AddNode 53.12
217 TestMultiNode/serial/MultiNodeLabels 0.06
218 TestMultiNode/serial/ProfileList 0.22
219 TestMultiNode/serial/CopyFile 7.26
220 TestMultiNode/serial/StopNode 2.36
221 TestMultiNode/serial/StartAfterStop 40.39
223 TestMultiNode/serial/DeleteNode 2.46
225 TestMultiNode/serial/RestartMultiNode 178.41
226 TestMultiNode/serial/ValidateNameConflict 46.12
233 TestScheduledStopUnix 111.98
237 TestRunningBinaryUpgrade 208.69
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
243 TestNoKubernetes/serial/StartWithK8s 101.63
244 TestNoKubernetes/serial/StartWithStopK8s 43.56
245 TestNoKubernetes/serial/Start 28.15
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 30.96
259 TestNoKubernetes/serial/Stop 1.47
260 TestNoKubernetes/serial/StartNoArgs 22.72
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
262 TestStoppedBinaryUpgrade/Setup 2.64
263 TestStoppedBinaryUpgrade/Upgrade 118.36
272 TestPause/serial/Start 101.34
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
275 TestPause/serial/SecondStartNoReconfiguration 66.89
278 TestPause/serial/Pause 1.23
279 TestPause/serial/VerifyStatus 0.28
280 TestPause/serial/Unpause 0.71
281 TestPause/serial/PauseAgain 1
282 TestPause/serial/DeletePaused 1.06
283 TestPause/serial/VerifyDeletedResources 0.44
TestDownloadOnly/v1.20.0/json-events (51.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-719324 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-719324 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.198910354s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (51.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-719324
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-719324: exit status 85 (68.567452ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-719324 | jenkins | v1.33.1 | 12 Aug 24 11:24 UTC |          |
	|         | -p download-only-719324        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:24:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:24:49.140564  470387 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:24:49.140827  470387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:24:49.140836  470387 out.go:304] Setting ErrFile to fd 2...
	I0812 11:24:49.140841  470387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:24:49.141020  470387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	W0812 11:24:49.141177  470387 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19411-463103/.minikube/config/config.json: open /home/jenkins/minikube-integration/19411-463103/.minikube/config/config.json: no such file or directory
	I0812 11:24:49.141768  470387 out.go:298] Setting JSON to true
	I0812 11:24:49.142746  470387 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11220,"bootTime":1723450669,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:24:49.142811  470387 start.go:139] virtualization: kvm guest
	I0812 11:24:49.145755  470387 out.go:97] [download-only-719324] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0812 11:24:49.145882  470387 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball: no such file or directory
	I0812 11:24:49.145924  470387 notify.go:220] Checking for updates...
	I0812 11:24:49.147270  470387 out.go:169] MINIKUBE_LOCATION=19411
	I0812 11:24:49.148859  470387 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:24:49.150260  470387 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 11:24:49.151718  470387 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:24:49.153078  470387 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 11:24:49.155633  470387 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 11:24:49.155949  470387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:24:49.189509  470387 out.go:97] Using the kvm2 driver based on user configuration
	I0812 11:24:49.189555  470387 start.go:297] selected driver: kvm2
	I0812 11:24:49.189566  470387 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:24:49.189955  470387 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:24:49.190042  470387 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:24:49.206236  470387 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:24:49.206309  470387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:24:49.206852  470387 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 11:24:49.207002  470387 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 11:24:49.207074  470387 cni.go:84] Creating CNI manager for ""
	I0812 11:24:49.207089  470387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:24:49.207100  470387 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:24:49.207164  470387 start.go:340] cluster config:
	{Name:download-only-719324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-719324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:24:49.207345  470387 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:24:49.209257  470387 out.go:97] Downloading VM boot image ...
	I0812 11:24:49.209330  470387 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0812 11:25:00.034323  470387 out.go:97] Starting "download-only-719324" primary control-plane node in "download-only-719324" cluster
	I0812 11:25:00.034369  470387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:25:00.148976  470387 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:25:00.149074  470387 cache.go:56] Caching tarball of preloaded images
	I0812 11:25:00.149327  470387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:25:00.151545  470387 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0812 11:25:00.151581  470387 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:25:00.347595  470387 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:25:13.703919  470387 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:25:13.704024  470387 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:25:14.627784  470387 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0812 11:25:14.628260  470387 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/download-only-719324/config.json ...
	I0812 11:25:14.628304  470387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/download-only-719324/config.json: {Name:mkdb07aa69f836da826dddb1251285e96f89b1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:25:14.628511  470387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0812 11:25:14.628737  470387 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-719324 host does not exist
	  To start a cluster, run: "minikube start -p download-only-719324"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-719324
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (14.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-689476 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-689476 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.571630016s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.57s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-689476
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-689476: exit status 85 (62.288703ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-719324 | jenkins | v1.33.1 | 12 Aug 24 11:24 UTC |                     |
	|         | -p download-only-719324        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| delete  | -p download-only-719324        | download-only-719324 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| start   | -o=json --download-only        | download-only-689476 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC |                     |
	|         | -p download-only-689476        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:25:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:25:40.683510  470721 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:25:40.683645  470721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:25:40.683654  470721 out.go:304] Setting ErrFile to fd 2...
	I0812 11:25:40.683658  470721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:25:40.683844  470721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 11:25:40.684406  470721 out.go:298] Setting JSON to true
	I0812 11:25:40.685449  470721 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11272,"bootTime":1723450669,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:25:40.685514  470721 start.go:139] virtualization: kvm guest
	I0812 11:25:40.687927  470721 out.go:97] [download-only-689476] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:25:40.688123  470721 notify.go:220] Checking for updates...
	I0812 11:25:40.689594  470721 out.go:169] MINIKUBE_LOCATION=19411
	I0812 11:25:40.691153  470721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:25:40.692789  470721 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 11:25:40.694467  470721 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:25:40.695972  470721 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 11:25:40.698657  470721 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 11:25:40.698933  470721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:25:40.733411  470721 out.go:97] Using the kvm2 driver based on user configuration
	I0812 11:25:40.733450  470721 start.go:297] selected driver: kvm2
	I0812 11:25:40.733456  470721 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:25:40.733812  470721 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:25:40.733906  470721 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:25:40.749865  470721 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:25:40.749925  470721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:25:40.750455  470721 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 11:25:40.750614  470721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 11:25:40.750683  470721 cni.go:84] Creating CNI manager for ""
	I0812 11:25:40.750696  470721 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:25:40.750704  470721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:25:40.750762  470721 start.go:340] cluster config:
	{Name:download-only-689476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-689476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:25:40.750853  470721 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:25:40.752676  470721 out.go:97] Starting "download-only-689476" primary control-plane node in "download-only-689476" cluster
	I0812 11:25:40.752708  470721 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:25:41.343011  470721 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0812 11:25:41.343053  470721 cache.go:56] Caching tarball of preloaded images
	I0812 11:25:41.343239  470721 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0812 11:25:41.345671  470721 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0812 11:25:41.345708  470721 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:25:41.462046  470721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-689476 host does not exist
	  To start a cluster, run: "minikube start -p download-only-689476"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-689476
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (50.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-414261 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-414261 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (50.218651762s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (50.22s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-414261
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-414261: exit status 85 (65.646423ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-719324 | jenkins | v1.33.1 | 12 Aug 24 11:24 UTC |                     |
	|         | -p download-only-719324           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| delete  | -p download-only-719324           | download-only-719324 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| start   | -o=json --download-only           | download-only-689476 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC |                     |
	|         | -p download-only-689476           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| delete  | -p download-only-689476           | download-only-689476 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC | 12 Aug 24 11:25 UTC |
	| start   | -o=json --download-only           | download-only-414261 | jenkins | v1.33.1 | 12 Aug 24 11:25 UTC |                     |
	|         | -p download-only-414261           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/12 11:25:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 11:25:55.583844  470943 out.go:291] Setting OutFile to fd 1 ...
	I0812 11:25:55.583968  470943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:25:55.583977  470943 out.go:304] Setting ErrFile to fd 2...
	I0812 11:25:55.583981  470943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 11:25:55.584148  470943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 11:25:55.584691  470943 out.go:298] Setting JSON to true
	I0812 11:25:55.585649  470943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11287,"bootTime":1723450669,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 11:25:55.585714  470943 start.go:139] virtualization: kvm guest
	I0812 11:25:55.588055  470943 out.go:97] [download-only-414261] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 11:25:55.588213  470943 notify.go:220] Checking for updates...
	I0812 11:25:55.589570  470943 out.go:169] MINIKUBE_LOCATION=19411
	I0812 11:25:55.591337  470943 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 11:25:55.593035  470943 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 11:25:55.594373  470943 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 11:25:55.595804  470943 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0812 11:25:55.598485  470943 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0812 11:25:55.598755  470943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 11:25:55.631109  470943 out.go:97] Using the kvm2 driver based on user configuration
	I0812 11:25:55.631142  470943 start.go:297] selected driver: kvm2
	I0812 11:25:55.631149  470943 start.go:901] validating driver "kvm2" against <nil>
	I0812 11:25:55.631504  470943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:25:55.631595  470943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19411-463103/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0812 11:25:55.646602  470943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0812 11:25:55.646750  470943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0812 11:25:55.647258  470943 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0812 11:25:55.647412  470943 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 11:25:55.647490  470943 cni.go:84] Creating CNI manager for ""
	I0812 11:25:55.647507  470943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0812 11:25:55.647518  470943 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0812 11:25:55.647609  470943 start.go:340] cluster config:
	{Name:download-only-414261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-414261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 11:25:55.647730  470943 iso.go:125] acquiring lock: {Name:mkd1550a4abc655be3a31efe392211d8c160ee8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 11:25:55.649746  470943 out.go:97] Starting "download-only-414261" primary control-plane node in "download-only-414261" cluster
	I0812 11:25:55.649787  470943 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0812 11:25:55.761371  470943 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:25:55.761409  470943 cache.go:56] Caching tarball of preloaded images
	I0812 11:25:55.761594  470943 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0812 11:25:55.763787  470943 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0812 11:25:55.763823  470943 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:25:55.876376  470943 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 11:26:07.110804  470943 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:26:07.110929  470943 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19411-463103/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 11:26:07.857185  470943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0812 11:26:07.857549  470943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/download-only-414261/config.json ...
	I0812 11:26:07.857609  470943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/download-only-414261/config.json: {Name:mk00f0b38d8184862140e962ddd2279fe10e48e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 11:26:07.857798  470943 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0812 11:26:07.857929  470943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19411-463103/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-414261 host does not exist
	  To start a cluster, run: "minikube start -p download-only-414261"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-414261
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-345035 --alsologtostderr --binary-mirror http://127.0.0.1:43861 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-345035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-345035
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (65.65s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-379497 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-379497 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.810779336s)
helpers_test.go:175: Cleaning up "offline-crio-379497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-379497
--- PASS: TestOffline (65.65s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-800382
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-800382: exit status 85 (49.889741ms)

                                                
                                                
-- stdout --
	* Profile "addons-800382" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800382"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-800382
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-800382: exit status 85 (51.103108ms)

                                                
                                                
-- stdout --
	* Profile "addons-800382" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-800382"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (68.13s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-977658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-977658 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m6.863222533s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-977658 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-977658 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-977658 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-977658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-977658
--- PASS: TestCertOptions (68.13s)

                                                
                                    
TestCertExpiration (269.83s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993047 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993047 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.396370491s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993047 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993047 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.38502862s)
helpers_test.go:175: Cleaning up "cert-expiration-993047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-993047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-993047: (1.051698028s)
--- PASS: TestCertExpiration (269.83s)

                                                
                                    
TestForceSystemdFlag (62.42s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-914561 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-914561 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.188054976s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-914561 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-914561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-914561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-914561: (1.030699216s)
--- PASS: TestForceSystemdFlag (62.42s)

                                                
                                    
TestForceSystemdEnv (89.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-806608 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-806608 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m28.1120108s)
helpers_test.go:175: Cleaning up "force-systemd-env-806608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-806608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-806608: (1.11249267s)
--- PASS: TestForceSystemdEnv (89.22s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.46s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.46s)

                                                
                                    
TestErrorSpam/setup (41.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-350425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-350425 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-350425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-350425 --driver=kvm2  --container-runtime=crio: (41.092771091s)
--- PASS: TestErrorSpam/setup (41.09s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
TestErrorSpam/stop (4.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop: (1.628044638s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop: (1.522844872s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-350425 --log_dir /tmp/nospam-350425 stop: (1.430469192s)
--- PASS: TestErrorSpam/stop (4.58s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19411-463103/.minikube/files/etc/test/nested/copy/470375/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (100.27s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-719946 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m40.274103254s)
--- PASS: TestFunctional/serial/StartWithProxy (100.27s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.98s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-719946 --alsologtostderr -v=8: (35.978625433s)
functional_test.go:663: soft start took 35.979482287s for "functional-719946" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.98s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-719946 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:3.1: (1.098944547s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:3.3: (1.20584138s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 cache add registry.k8s.io/pause:latest: (1.126137751s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-719946 /tmp/TestFunctionalserialCacheCmdcacheadd_local2958889072/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache add minikube-local-cache-test:functional-719946
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 cache add minikube-local-cache-test:functional-719946: (1.953960739s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache delete minikube-local-cache-test:functional-719946
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-719946
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.929828ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
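
The cache_reload flow above is: remove the cached image inside the node, confirm that "crictl inspecti" now fails, run "cache reload", then confirm the image is back. The following is a minimal sketch of that sequence (not part of the report), assuming the same binary path, profile name, and image tag that appear in the log above.

package main

import (
	"fmt"
	"os/exec"
)

const (
	minikube = "out/minikube-linux-amd64"       // binary path taken from the log above
	profile  = "functional-719946"              // profile name taken from the log above
	image    = "registry.k8s.io/pause:latest"   // image used by the test above
)

// run executes the minikube binary with the given arguments and echoes its output.
func run(args ...string) error {
	out, err := exec.Command(minikube, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", minikube, args, out)
	return err
}

func main() {
	// Remove the image inside the node, as the test does.
	_ = run("-p", profile, "ssh", "sudo crictl rmi "+image)

	// Expect inspecti to fail now: the image should be gone.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Println("unexpected: image still present after rmi")
	}

	// Reload the cache; afterwards the image should be present again.
	_ = run("-p", profile, "cache", "reload")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Println("unexpected: image still missing after cache reload")
	}
}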

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 kubectl -- --context functional-719946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-719946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-719946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.588075844s)
functional_test.go:761: restart took 33.588226877s for "functional-719946" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.59s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-719946 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
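
The ComponentHealth check lists the control-plane pods with "-l tier=control-plane -o json" and reads each pod's phase and Ready condition, which is what produces the "phase: Running" / "status: Ready" lines above. A hypothetical sketch of that decoding step (not from the report), assuming the context name from this run and only the JSON fields the check needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields of the kubectl JSON output this check reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "functional-719946",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json",
	).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}

	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}

	// Report phase and Ready condition per control-plane pod.
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}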

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 logs: (1.357447023s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 logs --file /tmp/TestFunctionalserialLogsFileCmd2675541416/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 logs --file /tmp/TestFunctionalserialLogsFileCmd2675541416/001/logs.txt: (1.44895666s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-719946 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-719946
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-719946: exit status 115 (294.859432ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.119:32142 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-719946 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 config get cpus: exit status 14 (48.849281ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 config get cpus: exit status 14 (48.757424ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
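
The ConfigCmd sequence relies on an exit-code contract: "config get" on an unset key exits with status 14 ("specified key could not be found in config"), while a set/get round trip exits 0. A small sketch (not from the report) that reads those exit codes through os/exec, assuming the binary path and profile name used above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode extracts the process exit status from an os/exec error.
func exitCode(err error) int {
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1 // the command did not run at all
}

func main() {
	minikube := "out/minikube-linux-amd64" // binary path from the log above
	profile := "functional-719946"         // profile name from the log above

	// "config get" on an unset key is expected to exit with status 14.
	err := exec.Command(minikube, "-p", profile, "config", "get", "cpus").Run()
	fmt.Println("get before set:", exitCode(err)) // expect 14

	// After "config set cpus 2", the same "config get" should exit 0.
	_ = exec.Command(minikube, "-p", profile, "config", "set", "cpus", "2").Run()
	err = exec.Command(minikube, "-p", profile, "config", "get", "cpus").Run()
	fmt.Println("get after set:", exitCode(err)) // expect 0

	// "config unset" returns the key to the not-found state (status 14 again).
	_ = exec.Command(minikube, "-p", profile, "config", "unset", "cpus").Run()
	err = exec.Command(minikube, "-p", profile, "config", "get", "cpus").Run()
	fmt.Println("get after unset:", exitCode(err)) // expect 14
}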

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-719946 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-719946 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 484412: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.31s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-719946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.083524ms)

                                                
                                                
-- stdout --
	* [functional-719946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:11:11.589597  484132 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:11:11.589932  484132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:11.589944  484132 out.go:304] Setting ErrFile to fd 2...
	I0812 12:11:11.589950  484132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:11.590139  484132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:11:11.590717  484132 out.go:298] Setting JSON to false
	I0812 12:11:11.591852  484132 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14003,"bootTime":1723450669,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:11:11.591925  484132 start.go:139] virtualization: kvm guest
	I0812 12:11:11.594113  484132 out.go:177] * [functional-719946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0812 12:11:11.595440  484132 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:11:11.595462  484132 notify.go:220] Checking for updates...
	I0812 12:11:11.598033  484132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:11:11.599183  484132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:11:11.600254  484132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:11.601695  484132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:11:11.603028  484132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:11:11.604666  484132 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:11:11.605123  484132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:11.605198  484132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:11.621224  484132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
	I0812 12:11:11.621884  484132 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:11.622411  484132 main.go:141] libmachine: Using API Version  1
	I0812 12:11:11.622431  484132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:11.622740  484132 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:11.622961  484132 main.go:141] libmachine: (functional-719946) Calling .DriverName
	I0812 12:11:11.623255  484132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:11:11.623691  484132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:11.623758  484132 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:11.639651  484132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0812 12:11:11.640197  484132 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:11.640794  484132 main.go:141] libmachine: Using API Version  1
	I0812 12:11:11.640819  484132 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:11.641120  484132 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:11.641348  484132 main.go:141] libmachine: (functional-719946) Calling .DriverName
	I0812 12:11:11.677166  484132 out.go:177] * Using the kvm2 driver based on existing profile
	I0812 12:11:11.678418  484132 start.go:297] selected driver: kvm2
	I0812 12:11:11.678438  484132 start.go:901] validating driver "kvm2" against &{Name:functional-719946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-719946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.119 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:11:11.678596  484132 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:11:11.681051  484132 out.go:177] 
	W0812 12:11:11.682750  484132 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0812 12:11:11.684195  484132 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
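
The DryRun case expects "start --dry-run --memory 250MB" to fail with exit status 23, which the stderr above maps to RSRC_INSUFFICIENT_REQ_MEMORY. A hypothetical sketch of that check (not from the report), reusing the flags from the command above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"out/minikube-linux-amd64", "start", "-p", "functional-719946",
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio",
	)
	_ = cmd.Run() // the command is expected to fail; only the exit code matters here

	if cmd.ProcessState == nil {
		fmt.Println("command did not start (binary not found?)")
		return
	}
	// Exit code 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY message above.
	fmt.Println("exit code:", cmd.ProcessState.ExitCode())
}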

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-719946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-719946 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.420029ms)

                                                
                                                
-- stdout --
	* [functional-719946] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:11:11.447857  484088 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:11:11.448126  484088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:11.448136  484088 out.go:304] Setting ErrFile to fd 2...
	I0812 12:11:11.448143  484088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:11:11.448452  484088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:11:11.448998  484088 out.go:298] Setting JSON to false
	I0812 12:11:11.450096  484088 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14002,"bootTime":1723450669,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0812 12:11:11.450165  484088 start.go:139] virtualization: kvm guest
	I0812 12:11:11.452434  484088 out.go:177] * [functional-719946] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0812 12:11:11.453705  484088 out.go:177]   - MINIKUBE_LOCATION=19411
	I0812 12:11:11.453730  484088 notify.go:220] Checking for updates...
	I0812 12:11:11.455838  484088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0812 12:11:11.457111  484088 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	I0812 12:11:11.458307  484088 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	I0812 12:11:11.459445  484088 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 12:11:11.460549  484088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0812 12:11:11.462185  484088 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:11:11.462862  484088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:11.462938  484088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:11.484844  484088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I0812 12:11:11.485300  484088 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:11.485874  484088 main.go:141] libmachine: Using API Version  1
	I0812 12:11:11.485895  484088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:11.486244  484088 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:11.486429  484088 main.go:141] libmachine: (functional-719946) Calling .DriverName
	I0812 12:11:11.486733  484088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0812 12:11:11.487010  484088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:11:11.487044  484088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:11:11.502970  484088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0812 12:11:11.503390  484088 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:11:11.503959  484088 main.go:141] libmachine: Using API Version  1
	I0812 12:11:11.503990  484088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:11:11.504292  484088 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:11:11.504486  484088 main.go:141] libmachine: (functional-719946) Calling .DriverName
	I0812 12:11:11.538207  484088 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0812 12:11:11.539464  484088 start.go:297] selected driver: kvm2
	I0812 12:11:11.539476  484088 start.go:901] validating driver "kvm2" against &{Name:functional-719946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-719946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.119 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0812 12:11:11.539623  484088 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0812 12:11:11.541496  484088 out.go:177] 
	W0812 12:11:11.542602  484088 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0812 12:11:11.543785  484088 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (24.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-719946 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-719946 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-dtmf8" [424647bd-1341-49be-bbf8-360194de2925] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-dtmf8" [424647bd-1341-49be-bbf8-360194de2925] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.004036665s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.119:31164
functional_test.go:1675: http://192.168.39.119:31164: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-dtmf8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.119:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.119:31164
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.55s)
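
ServiceCmdConnect creates a deployment, exposes it as a NodePort service, asks minikube for the service URL, and performs an HTTP GET that should return the echoserver body shown above. A minimal sketch of that final GET (not part of the report); the URL is the ephemeral endpoint printed in this run and would differ in any other run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.119:31164" // endpoint printed by "service ... --url" above

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// The echoserver response should contain the pod hostname, as seen above.
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}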

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [24d51bfd-8c42-4a27-9404-2067796e141f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003947202s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-719946 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-719946 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-719946 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-719946 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-719946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5ae6bcfb-3f65-481e-a822-446db5956f1d] Pending
helpers_test.go:344: "sp-pod" [5ae6bcfb-3f65-481e-a822-446db5956f1d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5ae6bcfb-3f65-481e-a822-446db5956f1d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004887939s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-719946 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-719946 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-719946 delete -f testdata/storage-provisioner/pod.yaml: (1.172893296s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-719946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f64337af-d14e-40f2-a546-8184808f5de5] Pending
helpers_test.go:344: "sp-pod" [f64337af-d14e-40f2-a546-8184808f5de5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f64337af-d14e-40f2-a546-8184808f5de5] Running
2024/08/12 12:11:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004701917s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-719946 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.10s)
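
The PersistentVolumeClaim check writes a marker file into the PVC-backed mount, deletes the pod, recreates it from the same manifest, and verifies the file survived. A rough sketch of that round trip driven through kubectl (not from the report; it omits the wait-for-Running step the real test performs), assuming the context name and manifest path from the log above:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the context used in this run.
func kubectl(args ...string) ([]byte, error) {
	all := append([]string{"--context", "functional-719946"}, args...)
	return exec.Command("kubectl", all...).CombinedOutput()
}

func main() {
	// Write a marker file into the PVC-backed mount, as the test does.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		fmt.Printf("touch failed: %v\n%s\n", err, out)
		return
	}

	// Delete and recreate the pod from the same testdata manifest referenced above.
	_, _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// Once the replacement pod is Running, the file should still be there.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount (err=%v):\n%s\n", err, out)
}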

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh -n functional-719946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cp functional-719946:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3092307007/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh -n functional-719946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh -n functional-719946 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
TestFunctional/parallel/MySQL (23.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-719946 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8j6k2" [8762a7c1-ed17-4e58-be3f-0fcf023b8d1d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8j6k2" [8762a7c1-ed17-4e58-be3f-0fcf023b8d1d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.005796899s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-719946 exec mysql-64454c8b5c-8j6k2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-719946 exec mysql-64454c8b5c-8j6k2 -- mysql -ppassword -e "show databases;": exit status 1 (156.60504ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-719946 exec mysql-64454c8b5c-8j6k2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-719946 exec mysql-64454c8b5c-8j6k2 -- mysql -ppassword -e "show databases;": exit status 1 (553.253888ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-719946 exec mysql-64454c8b5c-8j6k2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.74s)
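
The MySQL check tolerates the two ERROR 2002 failures above because mysqld may not accept connections immediately after the pod reports Running; the test simply retries "show databases" until it succeeds. A hypothetical retry-loop sketch (not from the report), assuming the context name and the ephemeral pod name from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-64454c8b5c-8j6k2" // pod name from the log above; differs per run

	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command(
			"kubectl", "--context", "functional-719946", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;",
		).CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded on attempt %d:\n%s\n", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(3 * time.Second) // crude backoff; the real test uses its own retry helpers
	}
}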

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/470375/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /etc/test/nested/copy/470375/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/470375.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /etc/ssl/certs/470375.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/470375.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /usr/share/ca-certificates/470375.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4703752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /etc/ssl/certs/4703752.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4703752.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /usr/share/ca-certificates/4703752.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-719946 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "sudo systemctl is-active docker": exit status 1 (241.27203ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "sudo systemctl is-active containerd": exit status 1 (222.850282ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
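
With crio as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit with status 3, which "minikube ssh" surfaces as a non-zero exit, exactly as in the output above. A small sketch of that probe (not from the report), assuming the binary path and profile name used in this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		cmd := exec.Command(
			"out/minikube-linux-amd64", "-p", "functional-719946",
			"ssh", "sudo systemctl is-active "+svc,
		)
		out, err := cmd.CombinedOutput()
		// Expected outcome: a non-zero exit and "inactive" on stdout.
		fmt.Printf("%s: err=%v output=%q\n", svc, err, strings.TrimSpace(string(out)))
	}
}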

                                                
                                    
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-719946 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-719946
localhost/kicbase/echo-server:functional-719946
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-719946 image ls --format short --alsologtostderr:
I0812 12:11:12.644421  484329 out.go:291] Setting OutFile to fd 1 ...
I0812 12:11:12.644724  484329 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:12.644733  484329 out.go:304] Setting ErrFile to fd 2...
I0812 12:11:12.644738  484329 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:12.644963  484329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
I0812 12:11:12.645598  484329 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:12.645693  484329 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:12.646090  484329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:12.646136  484329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:12.662082  484329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39729
I0812 12:11:12.662627  484329 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:12.663197  484329 main.go:141] libmachine: Using API Version  1
I0812 12:11:12.663226  484329 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:12.663604  484329 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:12.663820  484329 main.go:141] libmachine: (functional-719946) Calling .GetState
I0812 12:11:12.665669  484329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:12.665709  484329 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:12.681276  484329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
I0812 12:11:12.681797  484329 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:12.682382  484329 main.go:141] libmachine: Using API Version  1
I0812 12:11:12.682415  484329 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:12.682737  484329 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:12.682949  484329 main.go:141] libmachine: (functional-719946) Calling .DriverName
I0812 12:11:12.683135  484329 ssh_runner.go:195] Run: systemctl --version
I0812 12:11:12.683160  484329 main.go:141] libmachine: (functional-719946) Calling .GetSSHHostname
I0812 12:11:12.686355  484329 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:12.686798  484329 main.go:141] libmachine: (functional-719946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d3:09", ip: ""} in network mk-functional-719946: {Iface:virbr1 ExpiryTime:2024-08-12 13:07:53 +0000 UTC Type:0 Mac:52:54:00:cd:d3:09 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:functional-719946 Clientid:01:52:54:00:cd:d3:09}
I0812 12:11:12.686849  484329 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined IP address 192.168.39.119 and MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:12.687012  484329 main.go:141] libmachine: (functional-719946) Calling .GetSSHPort
I0812 12:11:12.687193  484329 main.go:141] libmachine: (functional-719946) Calling .GetSSHKeyPath
I0812 12:11:12.687374  484329 main.go:141] libmachine: (functional-719946) Calling .GetSSHUsername
I0812 12:11:12.687503  484329 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/functional-719946/id_rsa Username:docker}
I0812 12:11:12.785509  484329 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 12:11:12.856907  484329 main.go:141] libmachine: Making call to close driver server
I0812 12:11:12.856921  484329 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:12.857242  484329 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:12.857304  484329 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 12:11:12.857319  484329 main.go:141] libmachine: Making call to close driver server
I0812 12:11:12.857329  484329 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:12.857281  484329 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:12.857593  484329 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:12.857607  484329 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-719946 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-719946  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-719946  | 15d858f7fc111 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-719946  | b308035875219 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-719946 image ls --format table --alsologtostderr:
I0812 12:11:17.246247  484626 out.go:291] Setting OutFile to fd 1 ...
I0812 12:11:17.246573  484626 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:17.246585  484626 out.go:304] Setting ErrFile to fd 2...
I0812 12:11:17.246589  484626 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:17.246783  484626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
I0812 12:11:17.247355  484626 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:17.247456  484626 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:17.247824  484626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:17.247880  484626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:17.264312  484626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
I0812 12:11:17.264881  484626 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:17.265563  484626 main.go:141] libmachine: Using API Version  1
I0812 12:11:17.265596  484626 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:17.266001  484626 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:17.266265  484626 main.go:141] libmachine: (functional-719946) Calling .GetState
I0812 12:11:17.268395  484626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:17.268446  484626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:17.285938  484626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
I0812 12:11:17.286352  484626 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:17.286875  484626 main.go:141] libmachine: Using API Version  1
I0812 12:11:17.286903  484626 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:17.287334  484626 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:17.287574  484626 main.go:141] libmachine: (functional-719946) Calling .DriverName
I0812 12:11:17.287842  484626 ssh_runner.go:195] Run: systemctl --version
I0812 12:11:17.287877  484626 main.go:141] libmachine: (functional-719946) Calling .GetSSHHostname
I0812 12:11:17.291214  484626 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:17.291799  484626 main.go:141] libmachine: (functional-719946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d3:09", ip: ""} in network mk-functional-719946: {Iface:virbr1 ExpiryTime:2024-08-12 13:07:53 +0000 UTC Type:0 Mac:52:54:00:cd:d3:09 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:functional-719946 Clientid:01:52:54:00:cd:d3:09}
I0812 12:11:17.291833  484626 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined IP address 192.168.39.119 and MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:17.291837  484626 main.go:141] libmachine: (functional-719946) Calling .GetSSHPort
I0812 12:11:17.292019  484626 main.go:141] libmachine: (functional-719946) Calling .GetSSHKeyPath
I0812 12:11:17.292218  484626 main.go:141] libmachine: (functional-719946) Calling .GetSSHUsername
I0812 12:11:17.292394  484626 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/functional-719946/id_rsa Username:docker}
I0812 12:11:17.421146  484626 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 12:11:17.469878  484626 main.go:141] libmachine: Making call to close driver server
I0812 12:11:17.469901  484626 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:17.470234  484626 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:17.470258  484626 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 12:11:17.470274  484626 main.go:141] libmachine: Making call to close driver server
I0812 12:11:17.470272  484626 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:17.470287  484626 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:17.470518  484626 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:17.470525  484626 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:17.470539  484626 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-719946 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e6
64f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"2f9705207496ee76c7ef603944237b3816fababff71ef077e61c85122b19ba65","repoDigests":["docker.io/library/b2d2ff0b47baf5b0bf90bcbaf19fb14fda190bb18563cefab66f089242b58be0-tmp@sha256:989d2a99a5e07aa06fd2d4a7f4d3b85dd808a8d0259e88f604a797b53f8fb06d"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e
4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-719946"],"size":"4943877"},{"id":"b308035875219719219caabe870f24229e01e2f608e833dfff3b413332a1c6a6","repoDigests":["localhost/minikube-local-cache-test@sha256:2fd00a104f1b17c48fab975245d1d48a955143cd81b0dac88a7068df50673dca"],"repoTags":["localhost/minikube-local-cache-test:functional-719946"],"size":"3330"},{"id":"15d858f7fc111a711406c8845d4229ca1d055500c2454803bb3a59e6201c1e92","repoDigests":["localhost/my-image@sha256:12a222c6286624e4a91019388febef0b960c97c20cf935bfb3cbd161f97dc932"],"repoTags":["localhost/my-image:functional-719946"],"size":"1468599"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver
@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce6
7bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDige
sts":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"beae
173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-719946 image ls --format json --alsologtostderr:
I0812 12:11:16.972702  484573 out.go:291] Setting OutFile to fd 1 ...
I0812 12:11:16.972812  484573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:16.972817  484573 out.go:304] Setting ErrFile to fd 2...
I0812 12:11:16.972834  484573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:16.973059  484573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
I0812 12:11:16.973701  484573 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:16.973837  484573 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:16.974238  484573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:16.974296  484573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:16.990002  484573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
I0812 12:11:16.990501  484573 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:16.991096  484573 main.go:141] libmachine: Using API Version  1
I0812 12:11:16.991129  484573 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:16.991472  484573 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:16.991690  484573 main.go:141] libmachine: (functional-719946) Calling .GetState
I0812 12:11:16.993478  484573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:16.993526  484573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:17.010012  484573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
I0812 12:11:17.010487  484573 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:17.011053  484573 main.go:141] libmachine: Using API Version  1
I0812 12:11:17.011088  484573 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:17.011497  484573 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:17.011685  484573 main.go:141] libmachine: (functional-719946) Calling .DriverName
I0812 12:11:17.011932  484573 ssh_runner.go:195] Run: systemctl --version
I0812 12:11:17.011963  484573 main.go:141] libmachine: (functional-719946) Calling .GetSSHHostname
I0812 12:11:17.015296  484573 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:17.015744  484573 main.go:141] libmachine: (functional-719946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d3:09", ip: ""} in network mk-functional-719946: {Iface:virbr1 ExpiryTime:2024-08-12 13:07:53 +0000 UTC Type:0 Mac:52:54:00:cd:d3:09 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:functional-719946 Clientid:01:52:54:00:cd:d3:09}
I0812 12:11:17.015781  484573 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined IP address 192.168.39.119 and MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:17.015972  484573 main.go:141] libmachine: (functional-719946) Calling .GetSSHPort
I0812 12:11:17.016191  484573 main.go:141] libmachine: (functional-719946) Calling .GetSSHKeyPath
I0812 12:11:17.016380  484573 main.go:141] libmachine: (functional-719946) Calling .GetSSHUsername
I0812 12:11:17.016590  484573 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/functional-719946/id_rsa Username:docker}
I0812 12:11:17.123706  484573 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 12:11:17.185128  484573 main.go:141] libmachine: Making call to close driver server
I0812 12:11:17.185145  484573 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:17.185433  484573 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:17.185454  484573 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 12:11:17.185478  484573 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:17.185495  484573 main.go:141] libmachine: Making call to close driver server
I0812 12:11:17.185507  484573 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:17.185739  484573 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:17.185780  484573 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:17.185802  484573 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-719946 image ls --format yaml --alsologtostderr:
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: b308035875219719219caabe870f24229e01e2f608e833dfff3b413332a1c6a6
repoDigests:
- localhost/minikube-local-cache-test@sha256:2fd00a104f1b17c48fab975245d1d48a955143cd81b0dac88a7068df50673dca
repoTags:
- localhost/minikube-local-cache-test:functional-719946
size: "3330"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-719946
size: "4943877"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-719946 image ls --format yaml --alsologtostderr:
I0812 12:11:12.909610  484353 out.go:291] Setting OutFile to fd 1 ...
I0812 12:11:12.909759  484353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:12.909773  484353 out.go:304] Setting ErrFile to fd 2...
I0812 12:11:12.909779  484353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:12.910059  484353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
I0812 12:11:12.910883  484353 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:12.911049  484353 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:12.911701  484353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:12.911784  484353 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:12.929445  484353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
I0812 12:11:12.930063  484353 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:12.930743  484353 main.go:141] libmachine: Using API Version  1
I0812 12:11:12.930789  484353 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:12.931171  484353 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:12.931423  484353 main.go:141] libmachine: (functional-719946) Calling .GetState
I0812 12:11:12.933530  484353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:12.933578  484353 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:12.951192  484353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
I0812 12:11:12.951674  484353 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:12.952272  484353 main.go:141] libmachine: Using API Version  1
I0812 12:11:12.952305  484353 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:12.952641  484353 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:12.952866  484353 main.go:141] libmachine: (functional-719946) Calling .DriverName
I0812 12:11:12.953157  484353 ssh_runner.go:195] Run: systemctl --version
I0812 12:11:12.953192  484353 main.go:141] libmachine: (functional-719946) Calling .GetSSHHostname
I0812 12:11:12.956910  484353 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:12.957478  484353 main.go:141] libmachine: (functional-719946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d3:09", ip: ""} in network mk-functional-719946: {Iface:virbr1 ExpiryTime:2024-08-12 13:07:53 +0000 UTC Type:0 Mac:52:54:00:cd:d3:09 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:functional-719946 Clientid:01:52:54:00:cd:d3:09}
I0812 12:11:12.957521  484353 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined IP address 192.168.39.119 and MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:12.957724  484353 main.go:141] libmachine: (functional-719946) Calling .GetSSHPort
I0812 12:11:12.957984  484353 main.go:141] libmachine: (functional-719946) Calling .GetSSHKeyPath
I0812 12:11:12.958193  484353 main.go:141] libmachine: (functional-719946) Calling .GetSSHUsername
I0812 12:11:12.958377  484353 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/functional-719946/id_rsa Username:docker}
I0812 12:11:13.049241  484353 ssh_runner.go:195] Run: sudo crictl images --output json
I0812 12:11:13.130424  484353 main.go:141] libmachine: Making call to close driver server
I0812 12:11:13.130442  484353 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:13.130757  484353 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:13.130769  484353 main.go:141] libmachine: (functional-719946) DBG | Closing plugin on server side
I0812 12:11:13.130777  484353 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 12:11:13.130797  484353 main.go:141] libmachine: Making call to close driver server
I0812 12:11:13.130805  484353 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:13.131065  484353 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:13.131087  484353 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh pgrep buildkitd: exit status 1 (209.921422ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image build -t localhost/my-image:functional-719946 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image build -t localhost/my-image:functional-719946 testdata/build --alsologtostderr: (3.271076704s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-719946 image build -t localhost/my-image:functional-719946 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2f970520749
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-719946
--> 15d858f7fc1
Successfully tagged localhost/my-image:functional-719946
15d858f7fc111a711406c8845d4229ca1d055500c2454803bb3a59e6201c1e92
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-719946 image build -t localhost/my-image:functional-719946 testdata/build --alsologtostderr:
I0812 12:11:13.400260  484422 out.go:291] Setting OutFile to fd 1 ...
I0812 12:11:13.400385  484422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:13.400397  484422 out.go:304] Setting ErrFile to fd 2...
I0812 12:11:13.400402  484422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0812 12:11:13.400720  484422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
I0812 12:11:13.401646  484422 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:13.402360  484422 config.go:182] Loaded profile config "functional-719946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0812 12:11:13.402837  484422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:13.402891  484422 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:13.420300  484422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
I0812 12:11:13.420857  484422 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:13.421628  484422 main.go:141] libmachine: Using API Version  1
I0812 12:11:13.421663  484422 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:13.422047  484422 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:13.422230  484422 main.go:141] libmachine: (functional-719946) Calling .GetState
I0812 12:11:13.424138  484422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0812 12:11:13.424187  484422 main.go:141] libmachine: Launching plugin server for driver kvm2
I0812 12:11:13.441218  484422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
I0812 12:11:13.441660  484422 main.go:141] libmachine: () Calling .GetVersion
I0812 12:11:13.442132  484422 main.go:141] libmachine: Using API Version  1
I0812 12:11:13.442157  484422 main.go:141] libmachine: () Calling .SetConfigRaw
I0812 12:11:13.442502  484422 main.go:141] libmachine: () Calling .GetMachineName
I0812 12:11:13.442655  484422 main.go:141] libmachine: (functional-719946) Calling .DriverName
I0812 12:11:13.442871  484422 ssh_runner.go:195] Run: systemctl --version
I0812 12:11:13.442907  484422 main.go:141] libmachine: (functional-719946) Calling .GetSSHHostname
I0812 12:11:13.446318  484422 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:13.446889  484422 main.go:141] libmachine: (functional-719946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d3:09", ip: ""} in network mk-functional-719946: {Iface:virbr1 ExpiryTime:2024-08-12 13:07:53 +0000 UTC Type:0 Mac:52:54:00:cd:d3:09 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:functional-719946 Clientid:01:52:54:00:cd:d3:09}
I0812 12:11:13.446946  484422 main.go:141] libmachine: (functional-719946) DBG | domain functional-719946 has defined IP address 192.168.39.119 and MAC address 52:54:00:cd:d3:09 in network mk-functional-719946
I0812 12:11:13.447072  484422 main.go:141] libmachine: (functional-719946) Calling .GetSSHPort
I0812 12:11:13.447263  484422 main.go:141] libmachine: (functional-719946) Calling .GetSSHKeyPath
I0812 12:11:13.447431  484422 main.go:141] libmachine: (functional-719946) Calling .GetSSHUsername
I0812 12:11:13.447571  484422 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/functional-719946/id_rsa Username:docker}
I0812 12:11:13.549753  484422 build_images.go:161] Building image from path: /tmp/build.31769543.tar
I0812 12:11:13.549839  484422 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0812 12:11:13.564355  484422 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.31769543.tar
I0812 12:11:13.572325  484422 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.31769543.tar: stat -c "%s %y" /var/lib/minikube/build/build.31769543.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.31769543.tar': No such file or directory
I0812 12:11:13.572367  484422 ssh_runner.go:362] scp /tmp/build.31769543.tar --> /var/lib/minikube/build/build.31769543.tar (3072 bytes)
I0812 12:11:13.604064  484422 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.31769543
I0812 12:11:13.616106  484422 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.31769543 -xf /var/lib/minikube/build/build.31769543.tar
I0812 12:11:13.627057  484422 crio.go:315] Building image: /var/lib/minikube/build/build.31769543
I0812 12:11:13.627146  484422 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-719946 /var/lib/minikube/build/build.31769543 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0812 12:11:16.572183  484422 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-719946 /var/lib/minikube/build/build.31769543 --cgroup-manager=cgroupfs: (2.944998989s)
I0812 12:11:16.572287  484422 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.31769543
I0812 12:11:16.600290  484422 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.31769543.tar
I0812 12:11:16.613204  484422 build_images.go:217] Built localhost/my-image:functional-719946 from /tmp/build.31769543.tar
I0812 12:11:16.613250  484422 build_images.go:133] succeeded building to: functional-719946
I0812 12:11:16.613257  484422 build_images.go:134] failed building to: 
I0812 12:11:16.613357  484422 main.go:141] libmachine: Making call to close driver server
I0812 12:11:16.613379  484422 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:16.613742  484422 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:16.613763  484422 main.go:141] libmachine: Making call to close connection to plugin binary
I0812 12:11:16.613775  484422 main.go:141] libmachine: Making call to close driver server
I0812 12:11:16.613783  484422 main.go:141] libmachine: (functional-719946) Calling .Close
I0812 12:11:16.614072  484422 main.go:141] libmachine: Successfully made call to close driver server
I0812 12:11:16.614092  484422 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.948767767s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-719946
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image load --daemon kicbase/echo-server:functional-719946 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image load --daemon kicbase/echo-server:functional-719946 --alsologtostderr: (1.246655446s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image load --daemon kicbase/echo-server:functional-719946 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-719946
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image load --daemon kicbase/echo-server:functional-719946 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image load --daemon kicbase/echo-server:functional-719946 --alsologtostderr: (3.396068623s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image save kicbase/echo-server:functional-719946 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image save kicbase/echo-server:functional-719946 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.721040258s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image rm kicbase/echo-server:functional-719946 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.227141013s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-719946
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 image save --daemon kicbase/echo-server:functional-719946 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-719946 image save --daemon kicbase/echo-server:functional-719946 --alsologtostderr: (1.571074585s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-719946
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-719946 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-719946 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-bwvqm" [ae7481bc-d755-4847-863b-bb0ce6ec68fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-bwvqm" [ae7481bc-d755-4847-863b-bb0ce6ec68fa] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005074103s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "238.361288ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.267583ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "245.706816ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.963453ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.51s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdany-port4152724233/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723464669080498923" to /tmp/TestFunctionalparallelMountCmdany-port4152724233/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723464669080498923" to /tmp/TestFunctionalparallelMountCmdany-port4152724233/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723464669080498923" to /tmp/TestFunctionalparallelMountCmdany-port4152724233/001/test-1723464669080498923
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.016223ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 12 12:11 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 12 12:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 12 12:11 test-1723464669080498923
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh cat /mount-9p/test-1723464669080498923
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-719946 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5430a4ba-4e7a-4981-89e2-bc9556601019] Pending
helpers_test.go:344: "busybox-mount" [5430a4ba-4e7a-4981-89e2-bc9556601019] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5430a4ba-4e7a-4981-89e2-bc9556601019] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5430a4ba-4e7a-4981-89e2-bc9556601019] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00572734s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-719946 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdany-port4152724233/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.51s)
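Condensed into a shell sketch, the 9p round-trip this test performs looks roughly like this (profile name taken from the run above; /tmp/mount-src stands in for the per-test temp directory):
	# expose a host directory inside the VM at /mount-9p; the mount command stays in the foreground, so background it
	out/minikube-linux-amd64 mount -p functional-719946 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	# confirm the 9p filesystem is visible from the guest and list its contents
	out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-719946 ssh -- ls -la /mount-9p
	# tear down: force-unmount inside the guest, then stop the backgrounded mount process
	out/minikube-linux-amd64 -p functional-719946 ssh "sudo umount -f /mount-9p"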

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service list -o json
functional_test.go:1494: Took "462.471034ms" to run "out/minikube-linux-amd64 -p functional-719946 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.119:32229
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.119:32229
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
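Taken together, the ServiceCmd subtests above reduce to the following endpoint-lookup flow (the hello-node deployment and the 192.168.39.119:32229 endpoint come from the earlier DeployApp step; values will differ per cluster):
	out/minikube-linux-amd64 -p functional-719946 service list
	out/minikube-linux-amd64 -p functional-719946 service list -o json
	# HTTPS and plain-URL forms of the NodePort endpoint
	out/minikube-linux-amd64 -p functional-719946 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-719946 service hello-node --url
	# print only the node IP via a Go template
	out/minikube-linux-amd64 -p functional-719946 service hello-node --url --format={{.IP}}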

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdspecific-port3821590468/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.026488ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdspecific-port3821590468/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "sudo umount -f /mount-9p": exit status 1 (193.700055ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-719946 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdspecific-port3821590468/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount1: exit status 1 (236.674651ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-719946 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-719946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095901897/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
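The cleanup check leans on the mount kill switch rather than unmounting each path individually; a sketch with three mounts backed by the same host directory (the host path is illustrative):
	out/minikube-linux-amd64 mount -p functional-719946 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-719946 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-719946 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
	# verify one of them, then terminate every mount process for this profile in one call
	out/minikube-linux-amd64 -p functional-719946 ssh "findmnt -T" /mount1
	out/minikube-linux-amd64 mount -p functional-719946 --kill=true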

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-719946
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-719946
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-719946
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (271.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220134 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 12:15:44.616113  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.622311  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.632642  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.652990  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.693413  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.773835  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:44.934879  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:45.255537  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:45.895756  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:47.176467  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:49.737548  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:15:54.857845  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-220134 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m30.74971706s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (271.44s)
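The cluster brought up here is reproducible with the same two commands the test drives (flags copied from the run above; --ha requests a multi-control-plane topology):
	out/minikube-linux-amd64 start -p ha-220134 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr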

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- rollout status deployment/busybox
E0812 12:16:05.098488  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-220134 -- rollout status deployment/busybox: (4.75991034s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-82gr9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-9hhl4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-qh8vv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-82gr9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-9hhl4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-qh8vv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-82gr9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-9hhl4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-qh8vv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.00s)
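In shell form, the deploy-and-DNS check above is roughly the following (pod names such as busybox-fc5497c4f-82gr9 are generated per run, so a placeholder is used for the exec target):
	out/minikube-linux-amd64 kubectl -p ha-220134 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p ha-220134 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p ha-220134 -- get pods -o jsonpath='{.items[*].metadata.name}'
	# <busybox-pod> is a placeholder for one of the names returned above
	out/minikube-linux-amd64 kubectl -p ha-220134 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local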

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-82gr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-82gr9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-9hhl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-9hhl4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-qh8vv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220134 -- exec busybox-fc5497c4f-qh8vv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-220134 -v=7 --alsologtostderr
E0812 12:16:25.578826  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:17:06.539426  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-220134 -v=7 --alsologtostderr: (58.120005431s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-220134 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp testdata/cp-test.txt ha-220134:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt ha-220134-m02:/home/docker/cp-test_ha-220134_ha-220134-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test_ha-220134_ha-220134-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt ha-220134-m03:/home/docker/cp-test_ha-220134_ha-220134-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test_ha-220134_ha-220134-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt ha-220134-m04:/home/docker/cp-test_ha-220134_ha-220134-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test_ha-220134_ha-220134-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp testdata/cp-test.txt ha-220134-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m02:/home/docker/cp-test.txt ha-220134:/home/docker/cp-test_ha-220134-m02_ha-220134.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test_ha-220134-m02_ha-220134.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m02:/home/docker/cp-test.txt ha-220134-m03:/home/docker/cp-test_ha-220134-m02_ha-220134-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test_ha-220134-m02_ha-220134-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m02:/home/docker/cp-test.txt ha-220134-m04:/home/docker/cp-test_ha-220134-m02_ha-220134-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test_ha-220134-m02_ha-220134-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp testdata/cp-test.txt ha-220134-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt ha-220134:/home/docker/cp-test_ha-220134-m03_ha-220134.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test_ha-220134-m03_ha-220134.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt ha-220134-m02:/home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test_ha-220134-m03_ha-220134-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m03:/home/docker/cp-test.txt ha-220134-m04:/home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test_ha-220134-m03_ha-220134-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp testdata/cp-test.txt ha-220134-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile182589956/001/cp-test_ha-220134-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt ha-220134:/home/docker/cp-test_ha-220134-m04_ha-220134.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134 "sudo cat /home/docker/cp-test_ha-220134-m04_ha-220134.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt ha-220134-m02:/home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test_ha-220134-m04_ha-220134-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 cp ha-220134-m04:/home/docker/cp-test.txt ha-220134-m03:/home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m03 "sudo cat /home/docker/cp-test_ha-220134-m04_ha-220134-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.10s)
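The copy matrix above exercises every direction of minikube cp; one leg of each, in shell form (paths copied from the run, the /tmp destination is illustrative):
	# host -> node
	out/minikube-linux-amd64 -p ha-220134 cp testdata/cp-test.txt ha-220134:/home/docker/cp-test.txt
	# node -> host
	out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt /tmp/cp-test_ha-220134.txt
	# node -> node (primary to the m02 control plane)
	out/minikube-linux-amd64 -p ha-220134 cp ha-220134:/home/docker/cp-test.txt ha-220134-m02:/home/docker/cp-test_ha-220134_ha-220134-m02.txt
	# each copy is verified by reading the file back over ssh
	out/minikube-linux-amd64 -p ha-220134 ssh -n ha-220134-m02 "sudo cat /home/docker/cp-test_ha-220134_ha-220134-m02.txt"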

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.471681342s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-220134 node delete m03 -v=7 --alsologtostderr: (16.237591698s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.99s)
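Removing the secondary control plane and checking the result, as driven above:
	out/minikube-linux-amd64 -p ha-220134 node delete m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
	kubectl get nodes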

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (379.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220134 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 12:30:44.619379  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 12:32:07.663639  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-220134 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m18.781322415s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (379.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-220134 --control-plane -v=7 --alsologtostderr
E0812 12:35:44.616240  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-220134 --control-plane -v=7 --alsologtostderr: (1m18.020897204s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.89s)
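Between the worker added in AddWorkerNode earlier and the control plane added here, node scaling in this suite comes down to one command with or without --control-plane:
	# worker node (as in AddWorkerNode above)
	out/minikube-linux-amd64 node add -p ha-220134 -v=7 --alsologtostderr
	# additional control-plane node
	out/minikube-linux-amd64 node add -p ha-220134 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-220134 status -v=7 --alsologtostderr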

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
x
+
TestJSONOutput/start/Command (97.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-609373 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-609373 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.096128051s)
--- PASS: TestJSONOutput/start/Command (97.10s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-609373 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-609373 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-609373 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-609373 --output=json --user=testUser: (7.357862404s)
--- PASS: TestJSONOutput/stop/Command (7.36s)
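The four Command subtests above cover the whole lifecycle with structured output; the same sequence by hand (each command emits CloudEvents-style JSON lines on stdout, of the shape visible in the TestErrorJSONOutput stdout below):
	out/minikube-linux-amd64 start -p json-output-609373 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 pause -p json-output-609373 --output=json --user=testUser
	out/minikube-linux-amd64 unpause -p json-output-609373 --output=json --user=testUser
	out/minikube-linux-amd64 stop -p json-output-609373 --output=json --user=testUser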

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-945056 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-945056 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.722102ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"13f8ed89-45ea-4410-9c5b-2201b437f2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-945056] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c86524e7-068e-4889-89df-9a37338d51ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19411"}}
	{"specversion":"1.0","id":"b369fedd-3ec1-470d-a87c-d0c94aaec267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"34cea485-8d37-4df5-b43c-70f24b937f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig"}}
	{"specversion":"1.0","id":"b180fe94-7ed9-485c-9876-7e0e85f6f59e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube"}}
	{"specversion":"1.0","id":"71290044-438f-4e8d-8a7a-90b8739415dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"83b461bc-8049-4d68-b171-8231d36e69a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"98d9f85b-5bbe-413a-a1d2-87a8fcc0de30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-945056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-945056
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (91.03s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-908134 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-908134 --driver=kvm2  --container-runtime=crio: (45.50138858s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-911359 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-911359 --driver=kvm2  --container-runtime=crio: (42.816398033s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-908134
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-911359
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-911359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-911359
helpers_test.go:175: Cleaning up "first-908134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-908134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-908134: (1.021069854s)
--- PASS: TestMinikubeProfile (91.03s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-971761 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0812 12:40:44.619820  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-971761 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.062090083s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971761 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971761 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
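StartWithMountFirst plus VerifyMountFirst amount to booting a Kubernetes-free VM with a 9p mount configured at start time, then checking it from inside the guest (flags copied from the run above):
	out/minikube-linux-amd64 start -p mount-start-1-971761 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	# the mount appears at /minikube-host inside the guest
	out/minikube-linux-amd64 -p mount-start-1-971761 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-971761 ssh -- mount | grep 9p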

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-988302 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-988302 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.25885502s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.26s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-971761 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-988302
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-988302: (1.272627546s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.98s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-988302
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-988302: (21.979274293s)
--- PASS: TestMountStart/serial/RestartStopped (22.98s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-988302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (124.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-276573 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-276573 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.865684664s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-276573 -- rollout status deployment/busybox: (4.784376191s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-9sww5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-q48jv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-9sww5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-q48jv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-9sww5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-q48jv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-9sww5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-9sww5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-q48jv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-276573 -- exec busybox-fc5497c4f-q48jv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
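
For reference, the host-reachability check above can be reproduced by hand. The sketch below mirrors the commands the test ran, using the profile and pod names from this run (substitute your own); it assumes busybox's nslookup prints the resolved address on its fifth output line, which is what the awk 'NR==5' filter relies on.

  # resolve host.minikube.internal from inside a pod and keep the address
  HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-276573 -- \
    exec busybox-fc5497c4f-9sww5 -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  # ping the host (192.168.39.1 in this run) once from the same pod
  out/minikube-linux-amd64 kubectl -p multinode-276573 -- \
    exec busybox-fc5497c4f-9sww5 -- sh -c "ping -c 1 ${HOST_IP}"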

                                                
                                    
TestMultiNode/serial/AddNode (53.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-276573 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-276573 -v 3 --alsologtostderr: (52.549657833s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.12s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-276573 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp testdata/cp-test.txt multinode-276573:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573:/home/docker/cp-test.txt multinode-276573-m02:/home/docker/cp-test_multinode-276573_multinode-276573-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test_multinode-276573_multinode-276573-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573:/home/docker/cp-test.txt multinode-276573-m03:/home/docker/cp-test_multinode-276573_multinode-276573-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test_multinode-276573_multinode-276573-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp testdata/cp-test.txt multinode-276573-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt multinode-276573:/home/docker/cp-test_multinode-276573-m02_multinode-276573.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test_multinode-276573-m02_multinode-276573.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m02:/home/docker/cp-test.txt multinode-276573-m03:/home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test_multinode-276573-m02_multinode-276573-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp testdata/cp-test.txt multinode-276573-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile584427708/001/cp-test_multinode-276573-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt multinode-276573:/home/docker/cp-test_multinode-276573-m03_multinode-276573.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573 "sudo cat /home/docker/cp-test_multinode-276573-m03_multinode-276573.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 cp multinode-276573-m03:/home/docker/cp-test.txt multinode-276573-m02:/home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 "sudo cat /home/docker/cp-test_multinode-276573-m03_multinode-276573-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.26s)
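
Every copy check above follows the same round trip. As a hand-run sketch (paths and node names taken verbatim from this run), `minikube cp` pushes a file to a node and `minikube ssh` reads it back:

  # copy a local file onto the m02 node, then read it back over ssh
  out/minikube-linux-amd64 -p multinode-276573 cp testdata/cp-test.txt \
    multinode-276573-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-276573 ssh -n multinode-276573-m02 \
    "sudo cat /home/docker/cp-test.txt"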

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-276573 node stop m03: (1.482361818s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-276573 status: exit status 7 (435.972884ms)

                                                
                                                
-- stdout --
	multinode-276573
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276573-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276573-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr: exit status 7 (440.096259ms)

                                                
                                                
-- stdout --
	multinode-276573
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-276573-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-276573-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0812 12:44:57.290437  503230 out.go:291] Setting OutFile to fd 1 ...
	I0812 12:44:57.290681  503230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:44:57.290690  503230 out.go:304] Setting ErrFile to fd 2...
	I0812 12:44:57.290694  503230 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0812 12:44:57.290890  503230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19411-463103/.minikube/bin
	I0812 12:44:57.291061  503230 out.go:298] Setting JSON to false
	I0812 12:44:57.291087  503230 mustload.go:65] Loading cluster: multinode-276573
	I0812 12:44:57.291211  503230 notify.go:220] Checking for updates...
	I0812 12:44:57.291633  503230 config.go:182] Loaded profile config "multinode-276573": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0812 12:44:57.291658  503230 status.go:255] checking status of multinode-276573 ...
	I0812 12:44:57.292132  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.292190  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.314195  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0812 12:44:57.314653  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.315338  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.315361  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.315842  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.316119  503230 main.go:141] libmachine: (multinode-276573) Calling .GetState
	I0812 12:44:57.317988  503230 status.go:330] multinode-276573 host status = "Running" (err=<nil>)
	I0812 12:44:57.318009  503230 host.go:66] Checking if "multinode-276573" exists ...
	I0812 12:44:57.318421  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.318470  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.334518  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0812 12:44:57.335026  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.335527  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.335561  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.335868  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.336087  503230 main.go:141] libmachine: (multinode-276573) Calling .GetIP
	I0812 12:44:57.338792  503230 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:44:57.339202  503230 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:44:57.339232  503230 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:44:57.339340  503230 host.go:66] Checking if "multinode-276573" exists ...
	I0812 12:44:57.339765  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.339814  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.355651  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0812 12:44:57.356045  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.356490  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.356510  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.356822  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.357002  503230 main.go:141] libmachine: (multinode-276573) Calling .DriverName
	I0812 12:44:57.357170  503230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:44:57.357213  503230 main.go:141] libmachine: (multinode-276573) Calling .GetSSHHostname
	I0812 12:44:57.359993  503230 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:44:57.360420  503230 main.go:141] libmachine: (multinode-276573) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:69:c6", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:41:58 +0000 UTC Type:0 Mac:52:54:00:ae:69:c6 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:multinode-276573 Clientid:01:52:54:00:ae:69:c6}
	I0812 12:44:57.360450  503230 main.go:141] libmachine: (multinode-276573) DBG | domain multinode-276573 has defined IP address 192.168.39.187 and MAC address 52:54:00:ae:69:c6 in network mk-multinode-276573
	I0812 12:44:57.360652  503230 main.go:141] libmachine: (multinode-276573) Calling .GetSSHPort
	I0812 12:44:57.360849  503230 main.go:141] libmachine: (multinode-276573) Calling .GetSSHKeyPath
	I0812 12:44:57.361005  503230 main.go:141] libmachine: (multinode-276573) Calling .GetSSHUsername
	I0812 12:44:57.361198  503230 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573/id_rsa Username:docker}
	I0812 12:44:57.442977  503230 ssh_runner.go:195] Run: systemctl --version
	I0812 12:44:57.450206  503230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:44:57.465168  503230 kubeconfig.go:125] found "multinode-276573" server: "https://192.168.39.187:8443"
	I0812 12:44:57.465202  503230 api_server.go:166] Checking apiserver status ...
	I0812 12:44:57.465246  503230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 12:44:57.478631  503230 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0812 12:44:57.489730  503230 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0812 12:44:57.489803  503230 ssh_runner.go:195] Run: ls
	I0812 12:44:57.494288  503230 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I0812 12:44:57.498535  503230 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I0812 12:44:57.498560  503230 status.go:422] multinode-276573 apiserver status = Running (err=<nil>)
	I0812 12:44:57.498570  503230 status.go:257] multinode-276573 status: &{Name:multinode-276573 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:44:57.498599  503230 status.go:255] checking status of multinode-276573-m02 ...
	I0812 12:44:57.498896  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.498930  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.516462  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0812 12:44:57.517008  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.517524  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.517549  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.517869  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.518087  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetState
	I0812 12:44:57.519529  503230 status.go:330] multinode-276573-m02 host status = "Running" (err=<nil>)
	I0812 12:44:57.519558  503230 host.go:66] Checking if "multinode-276573-m02" exists ...
	I0812 12:44:57.519862  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.519901  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.535736  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0812 12:44:57.536135  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.536630  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.536652  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.537006  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.537190  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetIP
	I0812 12:44:57.539776  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | domain multinode-276573-m02 has defined MAC address 52:54:00:4e:e0:15 in network mk-multinode-276573
	I0812 12:44:57.540244  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:e0:15", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:43:10 +0000 UTC Type:0 Mac:52:54:00:4e:e0:15 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-276573-m02 Clientid:01:52:54:00:4e:e0:15}
	I0812 12:44:57.540277  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | domain multinode-276573-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:4e:e0:15 in network mk-multinode-276573
	I0812 12:44:57.540439  503230 host.go:66] Checking if "multinode-276573-m02" exists ...
	I0812 12:44:57.540757  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.540801  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.556842  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I0812 12:44:57.557300  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.557776  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.557799  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.558102  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.558318  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .DriverName
	I0812 12:44:57.558554  503230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 12:44:57.558572  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetSSHHostname
	I0812 12:44:57.561507  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | domain multinode-276573-m02 has defined MAC address 52:54:00:4e:e0:15 in network mk-multinode-276573
	I0812 12:44:57.562056  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:e0:15", ip: ""} in network mk-multinode-276573: {Iface:virbr1 ExpiryTime:2024-08-12 13:43:10 +0000 UTC Type:0 Mac:52:54:00:4e:e0:15 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-276573-m02 Clientid:01:52:54:00:4e:e0:15}
	I0812 12:44:57.562084  503230 main.go:141] libmachine: (multinode-276573-m02) DBG | domain multinode-276573-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:4e:e0:15 in network mk-multinode-276573
	I0812 12:44:57.562263  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetSSHPort
	I0812 12:44:57.562470  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetSSHKeyPath
	I0812 12:44:57.562665  503230 main.go:141] libmachine: (multinode-276573-m02) Calling .GetSSHUsername
	I0812 12:44:57.562846  503230 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19411-463103/.minikube/machines/multinode-276573-m02/id_rsa Username:docker}
	I0812 12:44:57.645071  503230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0812 12:44:57.661783  503230 status.go:257] multinode-276573-m02 status: &{Name:multinode-276573-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0812 12:44:57.661820  503230 status.go:255] checking status of multinode-276573-m03 ...
	I0812 12:44:57.662144  503230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0812 12:44:57.662192  503230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0812 12:44:57.678729  503230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0812 12:44:57.679198  503230 main.go:141] libmachine: () Calling .GetVersion
	I0812 12:44:57.679647  503230 main.go:141] libmachine: Using API Version  1
	I0812 12:44:57.679673  503230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0812 12:44:57.680033  503230 main.go:141] libmachine: () Calling .GetMachineName
	I0812 12:44:57.680244  503230 main.go:141] libmachine: (multinode-276573-m03) Calling .GetState
	I0812 12:44:57.681744  503230 status.go:330] multinode-276573-m03 host status = "Stopped" (err=<nil>)
	I0812 12:44:57.681762  503230 status.go:343] host is not running, skipping remaining checks
	I0812 12:44:57.681770  503230 status.go:257] multinode-276573-m03 status: &{Name:multinode-276573-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
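
Worth noting from the run above: once m03 is stopped, `minikube status` still reports the remaining nodes normally but exits with code 7 instead of 0, so the exit code alone lets scripts detect a partially stopped profile. A minimal hand check with the profile from this run:

  out/minikube-linux-amd64 -p multinode-276573 status \
    || echo "status exit code $? (7 in this run: one node host is Stopped)"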

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-276573 node start m03 -v=7 --alsologtostderr: (39.726678798s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.39s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-276573 node delete m03: (1.920285931s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-276573 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0812 12:55:44.615577  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-276573 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.85928354s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-276573 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.41s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-276573
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-276573-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-276573-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.786317ms)

                                                
                                                
-- stdout --
	* [multinode-276573-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-276573-m02' is duplicated with machine name 'multinode-276573-m02' in profile 'multinode-276573'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-276573-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-276573-m03 --driver=kvm2  --container-runtime=crio: (44.786023051s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-276573
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-276573: exit status 80 (219.103815ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-276573 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-276573-m03 already exists in multinode-276573-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-276573-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-276573-m03: (1.001498796s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.12s)
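
The two failures above are the expected guard rails: a new profile may not reuse a machine name already owned by another profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose auto-generated name collides with an existing profile (exit 80, GUEST_NODE_ADD). A sketch of the same checks, with the names from this run:

  # rejected: multinode-276573-m02 is already a machine in profile multinode-276573
  out/minikube-linux-amd64 start -p multinode-276573-m02 --driver=kvm2 --container-runtime=crio
  # rejected while profile multinode-276573-m03 exists: the next node would also be named -m03
  out/minikube-linux-amd64 node add -p multinode-276573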

                                                
                                    
TestScheduledStopUnix (111.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-646257 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-646257 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.38552599s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-646257 -n scheduled-stop-646257
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646257 -n scheduled-stop-646257
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646257
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646257
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-646257: exit status 7 (65.329843ms)

                                                
                                                
-- stdout --
	scheduled-stop-646257
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646257 -n scheduled-stop-646257
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646257 -n scheduled-stop-646257: exit status 7 (65.289414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-646257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-646257
--- PASS: TestScheduledStopUnix (111.98s)
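
The scheduled-stop flow exercised above is driven entirely by `minikube stop` flags. A condensed hand-run sketch using the same profile name and flags as this run (a real run has to wait out the 15s window before the final status check, which then prints "Stopped" and exits 7):

  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --schedule 5m        # arm a stop 5 minutes out
  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --cancel-scheduled   # cancel it again
  out/minikube-linux-amd64 stop -p scheduled-stop-646257 --schedule 15s       # re-arm with a short window
  sleep 20                                                                    # let the scheduled stop fire
  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646257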

                                                
                                    
TestRunningBinaryUpgrade (208.69s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.648210377 start -p running-upgrade-563509 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.648210377 start -p running-upgrade-563509 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.829777077s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-563509 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-563509 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.001539837s)
helpers_test.go:175: Cleaning up "running-upgrade-563509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-563509
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-563509: (1.226464874s)
--- PASS: TestRunningBinaryUpgrade (208.69s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.70384ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-395896] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19411
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19411-463103/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19411-463103/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
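
As the stderr above states, --no-kubernetes and --kubernetes-version are mutually exclusive (exit 14). If the version is coming from a stored global config rather than the command line, the suggested remedy is to unset it and retry without the version flag, e.g.:

  out/minikube-linux-amd64 config unset kubernetes-version
  out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --driver=kvm2 --container-runtime=crio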

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (101.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-395896 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-395896 --driver=kvm2  --container-runtime=crio: (1m41.38249523s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-395896 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (43.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0812 13:05:27.666381  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
E0812 13:05:44.617265  470375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19411-463103/.minikube/profiles/functional-719946/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.514708475s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-395896 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-395896 status -o json: exit status 2 (227.777667ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-395896","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-395896
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.56s)

                                                
                                    
TestNoKubernetes/serial/Start (28.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-395896 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.151730328s)
--- PASS: TestNoKubernetes/serial/Start (28.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-395896 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-395896 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.748365ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
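
The exit-1 result above is the pass condition: inside the VM, `systemctl is-active --quiet service kubelet` returns a non-zero status (3 here, i.e. the unit is not active), which `minikube ssh` surfaces as a non-zero exit, confirming no kubelet is running. Reproduced by hand:

  out/minikube-linux-amd64 ssh -p NoKubernetes-395896 "sudo systemctl is-active --quiet service kubelet" \
    || echo "kubelet inactive (expected for --no-kubernetes)"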

                                                
                                    
TestNoKubernetes/serial/ProfileList (30.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.38853016s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.566407667s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.96s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-395896
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-395896: (1.465475976s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-395896 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-395896 --driver=kvm2  --container-runtime=crio: (22.716553703s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.72s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-395896 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-395896 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.890833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3037800106 start -p stopped-upgrade-421827 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3037800106 start -p stopped-upgrade-421827 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.07364673s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3037800106 -p stopped-upgrade-421827 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3037800106 -p stopped-upgrade-421827 stop: (2.141423365s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-421827 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-421827 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.147260439s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.36s)

                                                
                                    
TestPause/serial/Start (101.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-752920 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-752920 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m41.342236303s)
--- PASS: TestPause/serial/Start (101.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-421827
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (66.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-752920 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-752920 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.874367255s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (66.89s)

                                                
                                    
TestPause/serial/Pause (1.23s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-752920 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-752920 --alsologtostderr -v=5: (1.231072926s)
--- PASS: TestPause/serial/Pause (1.23s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-752920 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-752920 --output=json --layout=cluster: exit status 2 (282.273146ms)

                                                
                                                
-- stdout --
	{"Name":"pause-752920","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-752920","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
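
The cluster-layout status above encodes each component as an HTTP-like status code (200 OK, 405 Stopped, 418 Paused in this run), and the command itself exited 2 for the paused cluster. Purely as an illustration (jq is not part of the test suite), the per-node component states can be pulled out of that JSON like so:

  out/minikube-linux-amd64 status -p pause-752920 --output=json --layout=cluster \
    | jq '.Nodes[].Components | map_values(.StatusName)'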

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-752920 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-752920 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-752920 --alsologtostderr -v=5: (1.003299538s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
TestPause/serial/DeletePaused (1.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-752920 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-752920 --alsologtostderr -v=5: (1.059954393s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                    

Test skip (35/221)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
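
The eight TunnelCmd skips above share one cause: the tunnel tests need access to the host routing table, and on this agent running 'route' required a password (the probe exited with status 1). A plausible sketch of such a precondition check follows; the sudo -n probe and the checkRoutePassword name are illustrative assumptions, not the actual code in functional_test_tunnel_test.go:

	package integration

	import (
		"os/exec"
		"testing"
	)

	// Skip the tunnel tests when 'route' cannot run without an interactive
	// password prompt. The sudo -n probe is an assumption for illustration.
	func checkRoutePassword(t *testing.T) {
		t.Helper()
		if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
			t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
		}
	}

	func TestTunnelSketch(t *testing.T) {
		checkRoutePassword(t)
		// tunnel assertions would follow once the precondition holds
	}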

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
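
TestGvisorAddon is gated on a test flag rather than on the driver or runtime; the log shows it skipped because --gvisor=false. A minimal illustration of that pattern (the flag wiring here is assumed, not copied from gvisor_addon_test.go):

	package integration

	import (
		"flag"
		"testing"
	)

	// Illustrative flag gate: defaulting to false keeps the test skipped
	// unless the test binary is invoked with -gvisor.
	var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

	func TestGvisorAddonSketch(t *testing.T) {
		if !*gvisor {
			t.Skip("skipping test because --gvisor=false")
		}
		// gvisor addon setup and assertions would follow
	}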

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
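
TestChangeNoneUser needs both the none driver and a non-empty SUDO_USER environment variable, and this run uses the kvm2 driver. A rough sketch of that guard, with driverName() as an assumed helper for whatever reads the --driver setting:

	package integration

	import (
		"os"
		"testing"
	)

	// driverName is a hypothetical stand-in for however the suite reads --driver.
	func driverName() string { return "kvm2" }

	func TestChangeNoneUserSketch(t *testing.T) {
		if driverName() != "none" || os.Getenv("SUDO_USER") == "" {
			t.Skip("Test requires none driver and SUDO_USER env to not be empty")
		}
		// the actual none-driver checks would follow here
	}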

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)